00:00:00.001 Started by upstream project "autotest-per-patch" build number 121236 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "jbp-per-patch" build number 21653 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.034 The recommended git tool is: git 00:00:00.034 using credential 00000000-0000-0000-0000-000000000002 00:00:00.036 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.053 Fetching changes from the remote Git repository 00:00:00.055 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.082 Using shallow fetch with depth 1 00:00:00.082 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.082 > git --version # timeout=10 00:00:00.107 > git --version # 'git version 2.39.2' 00:00:00.107 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.108 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.108 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/56/22956/2 # timeout=5 00:00:02.754 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.766 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.777 Checking out Revision 7df4c156d1a3f2eb71365249c31e0c8af51d619b (FETCH_HEAD) 00:00:02.777 > git config core.sparsecheckout # timeout=10 00:00:02.789 > git read-tree -mu HEAD # timeout=10 00:00:02.805 > git checkout -f 7df4c156d1a3f2eb71365249c31e0c8af51d619b # timeout=5 00:00:02.828 Commit message: "jenkins/jjb-config: Add ubuntu2404 to per-patch and nightly testing" 00:00:02.828 > git rev-list --no-walk f889f3ac3b572be5acb45f713fd98e0ce37eec5a # timeout=10 00:00:02.951 [Pipeline] Start of Pipeline 00:00:02.966 [Pipeline] library 00:00:02.967 Loading library shm_lib@master 00:00:02.967 Library shm_lib@master is cached. Copying from home. 00:00:02.988 [Pipeline] node 00:00:03.003 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.004 [Pipeline] { 00:00:03.014 [Pipeline] catchError 00:00:03.016 [Pipeline] { 00:00:03.026 [Pipeline] wrap 00:00:03.034 [Pipeline] { 00:00:03.040 [Pipeline] stage 00:00:03.041 [Pipeline] { (Prologue) 00:00:03.192 [Pipeline] sh 00:00:03.479 + logger -p user.info -t JENKINS-CI 00:00:03.494 [Pipeline] echo 00:00:03.495 Node: CYP12 00:00:03.501 [Pipeline] sh 00:00:03.800 [Pipeline] setCustomBuildProperty 00:00:03.812 [Pipeline] echo 00:00:03.813 Cleanup processes 00:00:03.818 [Pipeline] sh 00:00:04.105 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.105 3069231 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.119 [Pipeline] sh 00:00:04.405 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.405 ++ grep -v 'sudo pgrep' 00:00:04.405 ++ awk '{print $1}' 00:00:04.405 + sudo kill -9 00:00:04.405 + true 00:00:04.421 [Pipeline] cleanWs 00:00:04.433 [WS-CLEANUP] Deleting project workspace... 00:00:04.433 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.441 [WS-CLEANUP] done 00:00:04.445 [Pipeline] setCustomBuildProperty 00:00:04.459 [Pipeline] sh 00:00:04.744 + sudo git config --global --replace-all safe.directory '*' 00:00:04.824 [Pipeline] nodesByLabel 00:00:04.826 Found a total of 1 nodes with the 'sorcerer' label 00:00:04.836 [Pipeline] httpRequest 00:00:04.842 HttpMethod: GET 00:00:04.842 URL: http://10.211.164.96/packages/jbp_7df4c156d1a3f2eb71365249c31e0c8af51d619b.tar.gz 00:00:04.848 Sending request to url: http://10.211.164.96/packages/jbp_7df4c156d1a3f2eb71365249c31e0c8af51d619b.tar.gz 00:00:04.851 Response Code: HTTP/1.1 200 OK 00:00:04.852 Success: Status code 200 is in the accepted range: 200,404 00:00:04.852 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7df4c156d1a3f2eb71365249c31e0c8af51d619b.tar.gz 00:00:05.309 [Pipeline] sh 00:00:05.600 + tar --no-same-owner -xf jbp_7df4c156d1a3f2eb71365249c31e0c8af51d619b.tar.gz 00:00:05.619 [Pipeline] httpRequest 00:00:05.623 HttpMethod: GET 00:00:05.624 URL: http://10.211.164.96/packages/spdk_06472fb6d0c234046253a9989fef790e0cbb219e.tar.gz 00:00:05.628 Sending request to url: http://10.211.164.96/packages/spdk_06472fb6d0c234046253a9989fef790e0cbb219e.tar.gz 00:00:05.635 Response Code: HTTP/1.1 200 OK 00:00:05.636 Success: Status code 200 is in the accepted range: 200,404 00:00:05.636 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_06472fb6d0c234046253a9989fef790e0cbb219e.tar.gz 00:00:28.353 [Pipeline] sh 00:00:28.641 + tar --no-same-owner -xf spdk_06472fb6d0c234046253a9989fef790e0cbb219e.tar.gz 00:00:31.197 [Pipeline] sh 00:00:31.483 + git -C spdk log --oneline -n5 00:00:31.483 06472fb6d lib/idxd: fix batch size in kernel IDXD 00:00:31.483 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD 00:00:31.483 3dbaa93c1 nvmf: pass command dword 12 and 13 for write 00:00:31.483 19327fc3a bdev/nvme: use dtype/dspec for write commands 00:00:31.483 c11e5c113 bdev: introduce bdev_nvme_cdw12 and cdw13, and add them to ext_opts 00:00:31.495 [Pipeline] } 00:00:31.510 [Pipeline] // stage 00:00:31.517 [Pipeline] stage 00:00:31.519 [Pipeline] { (Prepare) 00:00:31.536 [Pipeline] writeFile 00:00:31.552 [Pipeline] sh 00:00:31.841 + logger -p user.info -t JENKINS-CI 00:00:31.855 [Pipeline] sh 00:00:32.141 + logger -p user.info -t JENKINS-CI 00:00:32.160 [Pipeline] sh 00:00:32.446 + cat autorun-spdk.conf 00:00:32.446 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.446 SPDK_TEST_NVMF=1 00:00:32.446 SPDK_TEST_NVME_CLI=1 00:00:32.446 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:32.446 SPDK_TEST_NVMF_NICS=e810 00:00:32.446 SPDK_TEST_VFIOUSER=1 00:00:32.446 SPDK_RUN_UBSAN=1 00:00:32.446 NET_TYPE=phy 00:00:32.454 RUN_NIGHTLY=0 00:00:32.458 [Pipeline] readFile 00:00:32.481 [Pipeline] withEnv 00:00:32.482 [Pipeline] { 00:00:32.495 [Pipeline] sh 00:00:32.782 + set -ex 00:00:32.782 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:32.782 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:32.782 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.782 ++ SPDK_TEST_NVMF=1 00:00:32.782 ++ SPDK_TEST_NVME_CLI=1 00:00:32.782 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:32.782 ++ SPDK_TEST_NVMF_NICS=e810 00:00:32.782 ++ SPDK_TEST_VFIOUSER=1 00:00:32.782 ++ SPDK_RUN_UBSAN=1 00:00:32.782 ++ NET_TYPE=phy 00:00:32.782 ++ RUN_NIGHTLY=0 00:00:32.782 + case $SPDK_TEST_NVMF_NICS in 00:00:32.782 + DRIVERS=ice 00:00:32.782 + [[ tcp == \r\d\m\a ]] 00:00:32.782 + [[ -n ice ]] 00:00:32.782 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 
00:00:32.782 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:42.782 rmmod: ERROR: Module irdma is not currently loaded 00:00:42.782 rmmod: ERROR: Module i40iw is not currently loaded 00:00:42.782 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:42.782 + true 00:00:42.782 + for D in $DRIVERS 00:00:42.782 + sudo modprobe ice 00:00:42.782 + exit 0 00:00:42.792 [Pipeline] } 00:00:42.811 [Pipeline] // withEnv 00:00:42.816 [Pipeline] } 00:00:42.833 [Pipeline] // stage 00:00:42.844 [Pipeline] catchError 00:00:42.846 [Pipeline] { 00:00:42.861 [Pipeline] timeout 00:00:42.861 Timeout set to expire in 40 min 00:00:42.863 [Pipeline] { 00:00:42.878 [Pipeline] stage 00:00:42.881 [Pipeline] { (Tests) 00:00:42.894 [Pipeline] sh 00:00:43.177 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:43.177 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:43.177 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:43.177 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:43.177 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:43.177 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:43.177 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:43.178 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:43.178 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:43.178 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:43.178 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:43.178 + source /etc/os-release 00:00:43.178 ++ NAME='Fedora Linux' 00:00:43.178 ++ VERSION='38 (Cloud Edition)' 00:00:43.178 ++ ID=fedora 00:00:43.178 ++ VERSION_ID=38 00:00:43.178 ++ VERSION_CODENAME= 00:00:43.178 ++ PLATFORM_ID=platform:f38 00:00:43.178 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:43.178 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:43.178 ++ LOGO=fedora-logo-icon 00:00:43.178 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:43.178 ++ HOME_URL=https://fedoraproject.org/ 00:00:43.178 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:43.178 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:43.178 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:43.178 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:43.178 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:43.178 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:43.178 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:43.178 ++ SUPPORT_END=2024-05-14 00:00:43.178 ++ VARIANT='Cloud Edition' 00:00:43.178 ++ VARIANT_ID=cloud 00:00:43.178 + uname -a 00:00:43.178 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:43.178 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:45.725 Hugepages 00:00:45.725 node hugesize free / total 00:00:45.725 node0 1048576kB 0 / 0 00:00:45.725 node0 2048kB 0 / 0 00:00:45.725 node1 1048576kB 0 / 0 00:00:45.725 node1 2048kB 0 / 0 00:00:45.725 00:00:45.725 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:45.725 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:45.725 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:45.725 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:45.725 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:45.725 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:45.725 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:45.725 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:45.725 I/OAT 
0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:45.725 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:45.725 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:45.725 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:45.725 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:45.725 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:45.725 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:45.725 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:45.725 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:45.725 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:45.986 + rm -f /tmp/spdk-ld-path 00:00:45.986 + source autorun-spdk.conf 00:00:45.986 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.986 ++ SPDK_TEST_NVMF=1 00:00:45.986 ++ SPDK_TEST_NVME_CLI=1 00:00:45.986 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.986 ++ SPDK_TEST_NVMF_NICS=e810 00:00:45.986 ++ SPDK_TEST_VFIOUSER=1 00:00:45.986 ++ SPDK_RUN_UBSAN=1 00:00:45.986 ++ NET_TYPE=phy 00:00:45.986 ++ RUN_NIGHTLY=0 00:00:45.986 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:45.986 + [[ -n '' ]] 00:00:45.986 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:45.986 + for M in /var/spdk/build-*-manifest.txt 00:00:45.986 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:45.986 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:45.986 + for M in /var/spdk/build-*-manifest.txt 00:00:45.986 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:45.986 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:45.986 ++ uname 00:00:45.986 + [[ Linux == \L\i\n\u\x ]] 00:00:45.986 + sudo dmesg -T 00:00:45.986 + sudo dmesg --clear 00:00:45.986 + dmesg_pid=3070238 00:00:45.986 + [[ Fedora Linux == FreeBSD ]] 00:00:45.986 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:45.986 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:45.986 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:45.986 + [[ -x /usr/src/fio-static/fio ]] 00:00:45.986 + export FIO_BIN=/usr/src/fio-static/fio 00:00:45.986 + FIO_BIN=/usr/src/fio-static/fio 00:00:45.986 + sudo dmesg -Tw 00:00:45.986 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:45.986 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:45.986 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:45.986 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:45.986 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:45.986 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:45.986 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:45.986 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:45.986 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:45.986 Test configuration: 00:00:45.986 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.986 SPDK_TEST_NVMF=1 00:00:45.986 SPDK_TEST_NVME_CLI=1 00:00:45.986 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.986 SPDK_TEST_NVMF_NICS=e810 00:00:45.986 SPDK_TEST_VFIOUSER=1 00:00:45.986 SPDK_RUN_UBSAN=1 00:00:45.986 NET_TYPE=phy 00:00:45.986 RUN_NIGHTLY=0 11:56:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:45.986 11:56:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:45.986 11:56:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:45.986 11:56:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:45.986 11:56:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:45.986 11:56:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:45.986 11:56:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:45.986 11:56:47 -- paths/export.sh@5 -- $ export PATH 00:00:45.986 11:56:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:45.986 11:56:47 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:45.986 11:56:47 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:45.986 11:56:47 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714125407.XXXXXX 00:00:45.986 11:56:47 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714125407.M6R95y 00:00:45.986 11:56:47 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:45.986 11:56:47 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:00:45.986 11:56:47 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:45.986 11:56:47 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:45.986 11:56:47 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:45.986 11:56:47 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:45.986 11:56:47 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:45.986 11:56:47 -- common/autotest_common.sh@10 -- $ set +x 00:00:46.248 11:56:47 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:46.248 11:56:47 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:46.248 11:56:47 -- pm/common@17 -- $ local monitor 00:00:46.248 11:56:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.248 11:56:47 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3070274 00:00:46.248 11:56:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.248 11:56:47 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3070275 00:00:46.248 11:56:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.248 11:56:47 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3070277 00:00:46.248 11:56:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.248 11:56:47 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3070279 00:00:46.248 11:56:47 -- pm/common@26 -- $ sleep 1 00:00:46.248 11:56:47 -- pm/common@21 -- $ date +%s 00:00:46.248 11:56:47 -- pm/common@21 -- $ date +%s 00:00:46.248 11:56:47 -- pm/common@21 -- $ date +%s 00:00:46.248 11:56:47 -- pm/common@21 -- $ date +%s 00:00:46.248 11:56:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714125407 00:00:46.248 11:56:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714125407 00:00:46.248 11:56:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714125407 00:00:46.248 11:56:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714125407 00:00:46.248 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714125407_collect-vmstat.pm.log 00:00:46.248 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714125407_collect-bmc-pm.bmc.pm.log 00:00:46.248 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714125407_collect-cpu-load.pm.log 00:00:46.248 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714125407_collect-cpu-temp.pm.log 00:00:47.193 11:56:48 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:00:47.193 11:56:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:47.193 11:56:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:47.193 11:56:48 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:47.193 11:56:48 -- spdk/autobuild.sh@16 -- $ date -u 00:00:47.193 Fri Apr 26 09:56:48 AM UTC 2024 00:00:47.193 11:56:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:47.193 v24.05-pre-448-g06472fb6d 00:00:47.193 11:56:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:47.193 11:56:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:47.193 11:56:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:47.193 11:56:48 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:47.193 11:56:48 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:47.193 11:56:48 -- common/autotest_common.sh@10 -- $ set +x 00:00:47.193 ************************************ 00:00:47.193 START TEST ubsan 00:00:47.193 ************************************ 00:00:47.193 11:56:48 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:00:47.193 using ubsan 00:00:47.193 00:00:47.193 real 0m0.000s 00:00:47.193 user 0m0.000s 00:00:47.193 sys 0m0.000s 00:00:47.193 11:56:48 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:47.193 11:56:48 -- common/autotest_common.sh@10 -- $ set +x 00:00:47.193 ************************************ 00:00:47.193 END TEST ubsan 00:00:47.193 ************************************ 00:00:47.454 11:56:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:47.454 11:56:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:47.454 11:56:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:47.454 11:56:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:47.454 11:56:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:47.454 11:56:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:47.454 11:56:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:47.454 11:56:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:47.454 11:56:48 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:47.454 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:47.454 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:48.026 Using 'verbs' RDMA provider 00:01:03.516 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:15.860 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:15.860 Creating mk/config.mk...done. 00:01:15.860 Creating mk/cc.flags.mk...done. 00:01:15.860 Type 'make' to build. 
00:01:15.860 11:57:16 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:15.860 11:57:16 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:15.860 11:57:16 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:15.860 11:57:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.860 ************************************ 00:01:15.860 START TEST make 00:01:15.860 ************************************ 00:01:15.860 11:57:16 -- common/autotest_common.sh@1111 -- $ make -j144 00:01:15.860 make[1]: Nothing to be done for 'all'. 00:01:17.249 The Meson build system 00:01:17.249 Version: 1.3.1 00:01:17.249 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:17.249 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:17.249 Build type: native build 00:01:17.249 Project name: libvfio-user 00:01:17.249 Project version: 0.0.1 00:01:17.249 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:17.249 C linker for the host machine: cc ld.bfd 2.39-16 00:01:17.249 Host machine cpu family: x86_64 00:01:17.249 Host machine cpu: x86_64 00:01:17.249 Run-time dependency threads found: YES 00:01:17.249 Library dl found: YES 00:01:17.249 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:17.249 Run-time dependency json-c found: YES 0.17 00:01:17.249 Run-time dependency cmocka found: YES 1.1.7 00:01:17.249 Program pytest-3 found: NO 00:01:17.249 Program flake8 found: NO 00:01:17.249 Program misspell-fixer found: NO 00:01:17.249 Program restructuredtext-lint found: NO 00:01:17.249 Program valgrind found: YES (/usr/bin/valgrind) 00:01:17.249 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:17.249 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:17.249 Compiler for C supports arguments -Wwrite-strings: YES 00:01:17.249 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:17.249 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:17.249 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:17.249 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:17.249 Build targets in project: 8 00:01:17.249 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:17.249 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:17.249 00:01:17.249 libvfio-user 0.0.1 00:01:17.249 00:01:17.249 User defined options 00:01:17.249 buildtype : debug 00:01:17.249 default_library: shared 00:01:17.249 libdir : /usr/local/lib 00:01:17.250 00:01:17.250 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:17.250 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:17.250 [1/37] Compiling C object samples/null.p/null.c.o 00:01:17.250 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:17.250 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:17.250 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:17.510 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:17.510 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:17.510 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:17.510 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:17.510 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:17.510 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:17.510 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:17.510 [12/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:17.510 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:17.510 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:17.510 [15/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:17.510 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:17.510 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:17.510 [18/37] Compiling C object samples/server.p/server.c.o 00:01:17.510 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:17.510 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:17.510 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:17.510 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:17.510 [23/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:17.510 [24/37] Compiling C object samples/client.p/client.c.o 00:01:17.510 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:17.510 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:17.510 [27/37] Linking target samples/client 00:01:17.510 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:17.510 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:17.510 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:17.510 [31/37] Linking target test/unit_tests 00:01:17.510 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:17.772 [33/37] Linking target samples/lspci 00:01:17.772 [34/37] Linking target samples/server 00:01:17.772 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:17.772 [36/37] Linking target samples/gpio-pci-idio-16 00:01:17.772 [37/37] Linking target samples/null 00:01:17.772 INFO: autodetecting backend as ninja 00:01:17.772 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:17.772 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:18.032 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:18.032 ninja: no work to do. 00:01:24.612 The Meson build system 00:01:24.613 Version: 1.3.1 00:01:24.613 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:24.613 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:24.613 Build type: native build 00:01:24.613 Program cat found: YES (/usr/bin/cat) 00:01:24.613 Project name: DPDK 00:01:24.613 Project version: 23.11.0 00:01:24.613 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:24.613 C linker for the host machine: cc ld.bfd 2.39-16 00:01:24.613 Host machine cpu family: x86_64 00:01:24.613 Host machine cpu: x86_64 00:01:24.613 Message: ## Building in Developer Mode ## 00:01:24.613 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:24.613 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:24.613 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:24.613 Program python3 found: YES (/usr/bin/python3) 00:01:24.613 Program cat found: YES (/usr/bin/cat) 00:01:24.613 Compiler for C supports arguments -march=native: YES 00:01:24.613 Checking for size of "void *" : 8 00:01:24.613 Checking for size of "void *" : 8 (cached) 00:01:24.613 Library m found: YES 00:01:24.613 Library numa found: YES 00:01:24.613 Has header "numaif.h" : YES 00:01:24.613 Library fdt found: NO 00:01:24.613 Library execinfo found: NO 00:01:24.613 Has header "execinfo.h" : YES 00:01:24.613 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:24.613 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:24.613 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:24.613 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:24.613 Run-time dependency openssl found: YES 3.0.9 00:01:24.613 Run-time dependency libpcap found: YES 1.10.4 00:01:24.613 Has header "pcap.h" with dependency libpcap: YES 00:01:24.613 Compiler for C supports arguments -Wcast-qual: YES 00:01:24.613 Compiler for C supports arguments -Wdeprecated: YES 00:01:24.613 Compiler for C supports arguments -Wformat: YES 00:01:24.613 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:24.613 Compiler for C supports arguments -Wformat-security: NO 00:01:24.613 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:24.613 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:24.613 Compiler for C supports arguments -Wnested-externs: YES 00:01:24.613 Compiler for C supports arguments -Wold-style-definition: YES 00:01:24.613 Compiler for C supports arguments -Wpointer-arith: YES 00:01:24.613 Compiler for C supports arguments -Wsign-compare: YES 00:01:24.613 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:24.613 Compiler for C supports arguments -Wundef: YES 00:01:24.613 Compiler for C supports arguments -Wwrite-strings: YES 00:01:24.613 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:24.613 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:24.613 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:24.613 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:24.613 Program objdump found: YES (/usr/bin/objdump) 00:01:24.613 Compiler for C supports arguments -mavx512f: YES 00:01:24.613 Checking if "AVX512 checking" compiles: YES 00:01:24.613 Fetching value of define "__SSE4_2__" : 1 00:01:24.613 Fetching value of define "__AES__" : 1 00:01:24.613 Fetching value of define "__AVX__" : 1 00:01:24.613 Fetching value of define "__AVX2__" : 1 00:01:24.613 Fetching value of define "__AVX512BW__" : 1 00:01:24.613 Fetching value of define "__AVX512CD__" : 1 00:01:24.613 Fetching value of define "__AVX512DQ__" : 1 00:01:24.613 Fetching value of define "__AVX512F__" : 1 00:01:24.613 Fetching value of define "__AVX512VL__" : 1 00:01:24.613 Fetching value of define "__PCLMUL__" : 1 00:01:24.613 Fetching value of define "__RDRND__" : 1 00:01:24.613 Fetching value of define "__RDSEED__" : 1 00:01:24.613 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:24.613 Fetching value of define "__znver1__" : (undefined) 00:01:24.613 Fetching value of define "__znver2__" : (undefined) 00:01:24.613 Fetching value of define "__znver3__" : (undefined) 00:01:24.613 Fetching value of define "__znver4__" : (undefined) 00:01:24.613 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:24.613 Message: lib/log: Defining dependency "log" 00:01:24.613 Message: lib/kvargs: Defining dependency "kvargs" 00:01:24.613 Message: lib/telemetry: Defining dependency "telemetry" 00:01:24.613 Checking for function "getentropy" : NO 00:01:24.613 Message: lib/eal: Defining dependency "eal" 00:01:24.613 Message: lib/ring: Defining dependency "ring" 00:01:24.613 Message: lib/rcu: Defining dependency "rcu" 00:01:24.613 Message: lib/mempool: Defining dependency "mempool" 00:01:24.613 Message: lib/mbuf: Defining dependency "mbuf" 00:01:24.613 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:24.613 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:24.613 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:24.613 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:24.613 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:24.613 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:24.613 Compiler for C supports arguments -mpclmul: YES 00:01:24.613 Compiler for C supports arguments -maes: YES 00:01:24.613 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:24.613 Compiler for C supports arguments -mavx512bw: YES 00:01:24.613 Compiler for C supports arguments -mavx512dq: YES 00:01:24.613 Compiler for C supports arguments -mavx512vl: YES 00:01:24.613 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:24.613 Compiler for C supports arguments -mavx2: YES 00:01:24.613 Compiler for C supports arguments -mavx: YES 00:01:24.613 Message: lib/net: Defining dependency "net" 00:01:24.613 Message: lib/meter: Defining dependency "meter" 00:01:24.613 Message: lib/ethdev: Defining dependency "ethdev" 00:01:24.613 Message: lib/pci: Defining dependency "pci" 00:01:24.613 Message: lib/cmdline: Defining dependency "cmdline" 00:01:24.613 Message: lib/hash: Defining dependency "hash" 00:01:24.613 Message: lib/timer: Defining dependency "timer" 00:01:24.613 Message: lib/compressdev: Defining dependency "compressdev" 00:01:24.613 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:24.613 Message: lib/dmadev: Defining dependency "dmadev" 00:01:24.613 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:24.613 
Message: lib/power: Defining dependency "power" 00:01:24.613 Message: lib/reorder: Defining dependency "reorder" 00:01:24.613 Message: lib/security: Defining dependency "security" 00:01:24.613 Has header "linux/userfaultfd.h" : YES 00:01:24.613 Has header "linux/vduse.h" : YES 00:01:24.613 Message: lib/vhost: Defining dependency "vhost" 00:01:24.613 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:24.613 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:24.613 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:24.613 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:24.613 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:24.613 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:24.613 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:24.613 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:24.614 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:24.614 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:24.614 Program doxygen found: YES (/usr/bin/doxygen) 00:01:24.614 Configuring doxy-api-html.conf using configuration 00:01:24.614 Configuring doxy-api-man.conf using configuration 00:01:24.614 Program mandb found: YES (/usr/bin/mandb) 00:01:24.614 Program sphinx-build found: NO 00:01:24.614 Configuring rte_build_config.h using configuration 00:01:24.614 Message: 00:01:24.614 ================= 00:01:24.614 Applications Enabled 00:01:24.614 ================= 00:01:24.614 00:01:24.614 apps: 00:01:24.614 00:01:24.614 00:01:24.614 Message: 00:01:24.614 ================= 00:01:24.614 Libraries Enabled 00:01:24.614 ================= 00:01:24.614 00:01:24.614 libs: 00:01:24.614 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:24.614 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:24.614 cryptodev, dmadev, power, reorder, security, vhost, 00:01:24.614 00:01:24.614 Message: 00:01:24.614 =============== 00:01:24.614 Drivers Enabled 00:01:24.614 =============== 00:01:24.614 00:01:24.614 common: 00:01:24.614 00:01:24.614 bus: 00:01:24.614 pci, vdev, 00:01:24.614 mempool: 00:01:24.614 ring, 00:01:24.614 dma: 00:01:24.614 00:01:24.614 net: 00:01:24.614 00:01:24.614 crypto: 00:01:24.614 00:01:24.614 compress: 00:01:24.614 00:01:24.614 vdpa: 00:01:24.614 00:01:24.614 00:01:24.614 Message: 00:01:24.614 ================= 00:01:24.614 Content Skipped 00:01:24.614 ================= 00:01:24.614 00:01:24.614 apps: 00:01:24.614 dumpcap: explicitly disabled via build config 00:01:24.614 graph: explicitly disabled via build config 00:01:24.614 pdump: explicitly disabled via build config 00:01:24.614 proc-info: explicitly disabled via build config 00:01:24.614 test-acl: explicitly disabled via build config 00:01:24.614 test-bbdev: explicitly disabled via build config 00:01:24.614 test-cmdline: explicitly disabled via build config 00:01:24.614 test-compress-perf: explicitly disabled via build config 00:01:24.614 test-crypto-perf: explicitly disabled via build config 00:01:24.614 test-dma-perf: explicitly disabled via build config 00:01:24.614 test-eventdev: explicitly disabled via build config 00:01:24.614 test-fib: explicitly disabled via build config 00:01:24.614 test-flow-perf: explicitly disabled via build config 00:01:24.614 test-gpudev: explicitly disabled via build config 00:01:24.614 test-mldev: explicitly disabled via build config 
00:01:24.614 test-pipeline: explicitly disabled via build config 00:01:24.614 test-pmd: explicitly disabled via build config 00:01:24.614 test-regex: explicitly disabled via build config 00:01:24.614 test-sad: explicitly disabled via build config 00:01:24.614 test-security-perf: explicitly disabled via build config 00:01:24.614 00:01:24.614 libs: 00:01:24.614 metrics: explicitly disabled via build config 00:01:24.614 acl: explicitly disabled via build config 00:01:24.614 bbdev: explicitly disabled via build config 00:01:24.614 bitratestats: explicitly disabled via build config 00:01:24.614 bpf: explicitly disabled via build config 00:01:24.614 cfgfile: explicitly disabled via build config 00:01:24.614 distributor: explicitly disabled via build config 00:01:24.614 efd: explicitly disabled via build config 00:01:24.614 eventdev: explicitly disabled via build config 00:01:24.614 dispatcher: explicitly disabled via build config 00:01:24.614 gpudev: explicitly disabled via build config 00:01:24.614 gro: explicitly disabled via build config 00:01:24.614 gso: explicitly disabled via build config 00:01:24.614 ip_frag: explicitly disabled via build config 00:01:24.614 jobstats: explicitly disabled via build config 00:01:24.614 latencystats: explicitly disabled via build config 00:01:24.614 lpm: explicitly disabled via build config 00:01:24.614 member: explicitly disabled via build config 00:01:24.614 pcapng: explicitly disabled via build config 00:01:24.614 rawdev: explicitly disabled via build config 00:01:24.614 regexdev: explicitly disabled via build config 00:01:24.614 mldev: explicitly disabled via build config 00:01:24.614 rib: explicitly disabled via build config 00:01:24.614 sched: explicitly disabled via build config 00:01:24.614 stack: explicitly disabled via build config 00:01:24.614 ipsec: explicitly disabled via build config 00:01:24.614 pdcp: explicitly disabled via build config 00:01:24.614 fib: explicitly disabled via build config 00:01:24.614 port: explicitly disabled via build config 00:01:24.614 pdump: explicitly disabled via build config 00:01:24.614 table: explicitly disabled via build config 00:01:24.614 pipeline: explicitly disabled via build config 00:01:24.614 graph: explicitly disabled via build config 00:01:24.614 node: explicitly disabled via build config 00:01:24.614 00:01:24.614 drivers: 00:01:24.614 common/cpt: not in enabled drivers build config 00:01:24.614 common/dpaax: not in enabled drivers build config 00:01:24.614 common/iavf: not in enabled drivers build config 00:01:24.614 common/idpf: not in enabled drivers build config 00:01:24.614 common/mvep: not in enabled drivers build config 00:01:24.614 common/octeontx: not in enabled drivers build config 00:01:24.614 bus/auxiliary: not in enabled drivers build config 00:01:24.614 bus/cdx: not in enabled drivers build config 00:01:24.614 bus/dpaa: not in enabled drivers build config 00:01:24.614 bus/fslmc: not in enabled drivers build config 00:01:24.614 bus/ifpga: not in enabled drivers build config 00:01:24.614 bus/platform: not in enabled drivers build config 00:01:24.614 bus/vmbus: not in enabled drivers build config 00:01:24.614 common/cnxk: not in enabled drivers build config 00:01:24.614 common/mlx5: not in enabled drivers build config 00:01:24.614 common/nfp: not in enabled drivers build config 00:01:24.614 common/qat: not in enabled drivers build config 00:01:24.614 common/sfc_efx: not in enabled drivers build config 00:01:24.614 mempool/bucket: not in enabled drivers build config 00:01:24.614 mempool/cnxk: 
not in enabled drivers build config 00:01:24.614 mempool/dpaa: not in enabled drivers build config 00:01:24.614 mempool/dpaa2: not in enabled drivers build config 00:01:24.614 mempool/octeontx: not in enabled drivers build config 00:01:24.614 mempool/stack: not in enabled drivers build config 00:01:24.614 dma/cnxk: not in enabled drivers build config 00:01:24.614 dma/dpaa: not in enabled drivers build config 00:01:24.614 dma/dpaa2: not in enabled drivers build config 00:01:24.614 dma/hisilicon: not in enabled drivers build config 00:01:24.614 dma/idxd: not in enabled drivers build config 00:01:24.614 dma/ioat: not in enabled drivers build config 00:01:24.615 dma/skeleton: not in enabled drivers build config 00:01:24.615 net/af_packet: not in enabled drivers build config 00:01:24.615 net/af_xdp: not in enabled drivers build config 00:01:24.615 net/ark: not in enabled drivers build config 00:01:24.615 net/atlantic: not in enabled drivers build config 00:01:24.615 net/avp: not in enabled drivers build config 00:01:24.615 net/axgbe: not in enabled drivers build config 00:01:24.615 net/bnx2x: not in enabled drivers build config 00:01:24.615 net/bnxt: not in enabled drivers build config 00:01:24.615 net/bonding: not in enabled drivers build config 00:01:24.615 net/cnxk: not in enabled drivers build config 00:01:24.615 net/cpfl: not in enabled drivers build config 00:01:24.615 net/cxgbe: not in enabled drivers build config 00:01:24.615 net/dpaa: not in enabled drivers build config 00:01:24.615 net/dpaa2: not in enabled drivers build config 00:01:24.615 net/e1000: not in enabled drivers build config 00:01:24.615 net/ena: not in enabled drivers build config 00:01:24.615 net/enetc: not in enabled drivers build config 00:01:24.615 net/enetfec: not in enabled drivers build config 00:01:24.615 net/enic: not in enabled drivers build config 00:01:24.615 net/failsafe: not in enabled drivers build config 00:01:24.615 net/fm10k: not in enabled drivers build config 00:01:24.615 net/gve: not in enabled drivers build config 00:01:24.615 net/hinic: not in enabled drivers build config 00:01:24.615 net/hns3: not in enabled drivers build config 00:01:24.615 net/i40e: not in enabled drivers build config 00:01:24.615 net/iavf: not in enabled drivers build config 00:01:24.615 net/ice: not in enabled drivers build config 00:01:24.615 net/idpf: not in enabled drivers build config 00:01:24.615 net/igc: not in enabled drivers build config 00:01:24.615 net/ionic: not in enabled drivers build config 00:01:24.615 net/ipn3ke: not in enabled drivers build config 00:01:24.615 net/ixgbe: not in enabled drivers build config 00:01:24.615 net/mana: not in enabled drivers build config 00:01:24.615 net/memif: not in enabled drivers build config 00:01:24.615 net/mlx4: not in enabled drivers build config 00:01:24.615 net/mlx5: not in enabled drivers build config 00:01:24.615 net/mvneta: not in enabled drivers build config 00:01:24.615 net/mvpp2: not in enabled drivers build config 00:01:24.615 net/netvsc: not in enabled drivers build config 00:01:24.615 net/nfb: not in enabled drivers build config 00:01:24.615 net/nfp: not in enabled drivers build config 00:01:24.615 net/ngbe: not in enabled drivers build config 00:01:24.615 net/null: not in enabled drivers build config 00:01:24.615 net/octeontx: not in enabled drivers build config 00:01:24.615 net/octeon_ep: not in enabled drivers build config 00:01:24.615 net/pcap: not in enabled drivers build config 00:01:24.615 net/pfe: not in enabled drivers build config 00:01:24.615 net/qede: 
not in enabled drivers build config 00:01:24.615 net/ring: not in enabled drivers build config 00:01:24.615 net/sfc: not in enabled drivers build config 00:01:24.615 net/softnic: not in enabled drivers build config 00:01:24.615 net/tap: not in enabled drivers build config 00:01:24.615 net/thunderx: not in enabled drivers build config 00:01:24.615 net/txgbe: not in enabled drivers build config 00:01:24.615 net/vdev_netvsc: not in enabled drivers build config 00:01:24.615 net/vhost: not in enabled drivers build config 00:01:24.615 net/virtio: not in enabled drivers build config 00:01:24.615 net/vmxnet3: not in enabled drivers build config 00:01:24.615 raw/*: missing internal dependency, "rawdev" 00:01:24.615 crypto/armv8: not in enabled drivers build config 00:01:24.615 crypto/bcmfs: not in enabled drivers build config 00:01:24.615 crypto/caam_jr: not in enabled drivers build config 00:01:24.615 crypto/ccp: not in enabled drivers build config 00:01:24.615 crypto/cnxk: not in enabled drivers build config 00:01:24.615 crypto/dpaa_sec: not in enabled drivers build config 00:01:24.615 crypto/dpaa2_sec: not in enabled drivers build config 00:01:24.615 crypto/ipsec_mb: not in enabled drivers build config 00:01:24.615 crypto/mlx5: not in enabled drivers build config 00:01:24.615 crypto/mvsam: not in enabled drivers build config 00:01:24.615 crypto/nitrox: not in enabled drivers build config 00:01:24.615 crypto/null: not in enabled drivers build config 00:01:24.615 crypto/octeontx: not in enabled drivers build config 00:01:24.615 crypto/openssl: not in enabled drivers build config 00:01:24.615 crypto/scheduler: not in enabled drivers build config 00:01:24.615 crypto/uadk: not in enabled drivers build config 00:01:24.615 crypto/virtio: not in enabled drivers build config 00:01:24.615 compress/isal: not in enabled drivers build config 00:01:24.615 compress/mlx5: not in enabled drivers build config 00:01:24.615 compress/octeontx: not in enabled drivers build config 00:01:24.615 compress/zlib: not in enabled drivers build config 00:01:24.615 regex/*: missing internal dependency, "regexdev" 00:01:24.615 ml/*: missing internal dependency, "mldev" 00:01:24.615 vdpa/ifc: not in enabled drivers build config 00:01:24.615 vdpa/mlx5: not in enabled drivers build config 00:01:24.615 vdpa/nfp: not in enabled drivers build config 00:01:24.615 vdpa/sfc: not in enabled drivers build config 00:01:24.615 event/*: missing internal dependency, "eventdev" 00:01:24.615 baseband/*: missing internal dependency, "bbdev" 00:01:24.615 gpu/*: missing internal dependency, "gpudev" 00:01:24.615 00:01:24.615 00:01:24.615 Build targets in project: 84 00:01:24.615 00:01:24.615 DPDK 23.11.0 00:01:24.615 00:01:24.615 User defined options 00:01:24.615 buildtype : debug 00:01:24.615 default_library : shared 00:01:24.615 libdir : lib 00:01:24.615 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:24.615 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:24.615 c_link_args : 00:01:24.615 cpu_instruction_set: native 00:01:24.615 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:24.615 disable_libs : 
pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:24.615 enable_docs : false 00:01:24.615 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:24.615 enable_kmods : false 00:01:24.615 tests : false 00:01:24.615 00:01:24.615 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:24.615 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:24.615 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:24.615 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:24.615 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:24.615 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:24.615 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:24.615 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:24.615 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:24.615 [8/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:24.615 [9/264] Linking static target lib/librte_kvargs.a 00:01:24.615 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:24.616 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:24.616 [12/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:24.616 [13/264] Linking static target lib/librte_log.a 00:01:24.616 [14/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:24.616 [15/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:24.616 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:24.616 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:24.616 [18/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:24.616 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:24.616 [20/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:24.616 [21/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:24.616 [22/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:24.616 [23/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:24.616 [24/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:24.616 [25/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:24.616 [26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:24.616 [27/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:24.616 [28/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:24.616 [29/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:24.616 [30/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:24.616 [31/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:24.616 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:24.616 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:24.616 [34/264] Linking static target lib/librte_pci.a 00:01:24.616 [35/264] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:24.616 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:24.616 [37/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:24.616 [38/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:24.616 [39/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:24.616 [40/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:24.616 [41/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:24.616 [42/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:24.616 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:24.616 [44/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:24.616 [45/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:24.616 [46/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:24.616 [47/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.616 [48/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.616 [49/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:24.616 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:24.616 [51/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:24.616 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:24.616 [53/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:24.616 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:24.876 [55/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:24.876 [56/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:24.876 [57/264] Linking static target lib/librte_ring.a 00:01:24.876 [58/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:24.876 [59/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:24.876 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:24.876 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:24.876 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:24.876 [63/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:24.876 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:24.876 [65/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:24.876 [66/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:24.876 [67/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:24.876 [68/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:24.876 [69/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:24.876 [70/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:24.876 [71/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:24.876 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:24.876 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:24.876 [74/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 
00:01:24.876 [75/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:24.876 [76/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:24.876 [77/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:24.876 [78/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:24.876 [79/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:24.876 [80/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:24.876 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:24.876 [82/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:24.876 [83/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:24.876 [84/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:24.876 [85/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:24.876 [86/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:24.876 [87/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:24.876 [88/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:24.876 [89/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:24.876 [90/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:24.876 [91/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:24.876 [92/264] Linking static target lib/librte_meter.a 00:01:24.876 [93/264] Linking static target lib/librte_telemetry.a 00:01:24.876 [94/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:24.876 [95/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:24.876 [96/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:24.876 [97/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:24.876 [98/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:24.876 [99/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:24.876 [100/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:24.876 [101/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:24.876 [102/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:24.876 [103/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:24.876 [104/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:24.876 [105/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:24.876 [106/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:24.876 [107/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:24.876 [108/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:24.876 [109/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:24.876 [110/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:24.876 [111/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:24.876 [112/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:24.876 [113/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:24.877 [114/264] Linking static target lib/librte_cmdline.a 00:01:24.877 
[115/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:24.877 [116/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:24.877 [117/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:24.877 [118/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:24.877 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:24.877 [120/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:24.877 [121/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:24.877 [122/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:24.877 [123/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:24.877 [124/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:24.877 [125/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:24.877 [126/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:24.877 [127/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:24.877 [128/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.877 [129/264] Linking static target lib/librte_timer.a 00:01:24.877 [130/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:24.877 [131/264] Linking static target lib/librte_net.a 00:01:24.877 [132/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:24.877 [133/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:24.877 [134/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:24.877 [135/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:24.877 [136/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:24.877 [137/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:24.877 [138/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:24.877 [139/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:24.877 [140/264] Linking target lib/librte_log.so.24.0 00:01:24.877 [141/264] Linking static target lib/librte_dmadev.a 00:01:24.877 [142/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:24.877 [143/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:24.877 [144/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:24.877 [145/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:24.877 [146/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:24.877 [147/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:24.877 [148/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:25.137 [149/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:25.137 [150/264] Linking static target lib/librte_compressdev.a 00:01:25.137 [151/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:25.137 [152/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:25.137 [153/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:25.137 [154/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:25.137 [155/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:25.137 
[156/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:25.137 [157/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:25.137 [158/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:25.137 [159/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.137 [160/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:25.137 [161/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:25.137 [162/264] Linking static target lib/librte_mempool.a 00:01:25.137 [163/264] Linking static target lib/librte_rcu.a 00:01:25.137 [164/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:25.137 [165/264] Linking static target lib/librte_eal.a 00:01:25.137 [166/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:25.137 [167/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:25.137 [168/264] Linking static target lib/librte_security.a 00:01:25.137 [169/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:25.137 [170/264] Linking static target lib/librte_power.a 00:01:25.137 [171/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:25.137 [172/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:25.137 [173/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:25.137 [174/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:25.137 [175/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:25.137 [176/264] Linking static target lib/librte_reorder.a 00:01:25.137 [177/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:25.137 [178/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:25.137 [179/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:25.137 [180/264] Linking static target lib/librte_hash.a 00:01:25.137 [181/264] Linking target lib/librte_kvargs.so.24.0 00:01:25.137 [182/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:25.137 [183/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:25.137 [184/264] Linking static target lib/librte_mbuf.a 00:01:25.137 [185/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:25.137 [186/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:25.137 [187/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.137 [188/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:25.137 [189/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:25.137 [190/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:25.137 [191/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:25.137 [192/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:25.137 [193/264] Linking static target drivers/librte_bus_pci.a 00:01:25.137 [194/264] Linking static target drivers/librte_bus_vdev.a 00:01:25.137 [195/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:25.398 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:25.398 [197/264] Compiling C object 
drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:25.398 [198/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:25.398 [199/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.398 [200/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:25.398 [201/264] Linking static target drivers/librte_mempool_ring.a 00:01:25.398 [202/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:25.398 [203/264] Linking static target lib/librte_cryptodev.a 00:01:25.398 [204/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.398 [205/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:25.398 [206/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.398 [207/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.658 [208/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.658 [209/264] Linking target lib/librte_telemetry.so.24.0 00:01:25.658 [210/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.658 [211/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.658 [212/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:25.658 [213/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.658 [214/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.918 [215/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:25.918 [216/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:25.918 [217/264] Linking static target lib/librte_ethdev.a 00:01:25.918 [218/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.918 [219/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.918 [220/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.918 [221/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.179 [222/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.179 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.120 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:27.120 [225/264] Linking static target lib/librte_vhost.a 00:01:27.399 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.338 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.942 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.513 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.513 [230/264] Linking target lib/librte_eal.so.24.0 00:01:36.773 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:36.773 [232/264] Linking target 
lib/librte_timer.so.24.0 00:01:36.773 [233/264] Linking target lib/librte_ring.so.24.0 00:01:36.773 [234/264] Linking target lib/librte_dmadev.so.24.0 00:01:36.773 [235/264] Linking target lib/librte_pci.so.24.0 00:01:36.773 [236/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:36.773 [237/264] Linking target lib/librte_meter.so.24.0 00:01:36.773 [238/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:36.773 [239/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:36.773 [240/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:36.773 [241/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:36.773 [242/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:37.033 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:37.033 [244/264] Linking target lib/librte_rcu.so.24.0 00:01:37.033 [245/264] Linking target lib/librte_mempool.so.24.0 00:01:37.033 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:37.034 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:37.034 [248/264] Linking target lib/librte_mbuf.so.24.0 00:01:37.034 [249/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:37.294 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:37.294 [251/264] Linking target lib/librte_compressdev.so.24.0 00:01:37.294 [252/264] Linking target lib/librte_net.so.24.0 00:01:37.294 [253/264] Linking target lib/librte_reorder.so.24.0 00:01:37.294 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:01:37.294 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:37.554 [256/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:37.554 [257/264] Linking target lib/librte_cmdline.so.24.0 00:01:37.554 [258/264] Linking target lib/librte_hash.so.24.0 00:01:37.554 [259/264] Linking target lib/librte_ethdev.so.24.0 00:01:37.554 [260/264] Linking target lib/librte_security.so.24.0 00:01:37.554 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:37.554 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:37.554 [263/264] Linking target lib/librte_power.so.24.0 00:01:37.554 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:37.815 INFO: autodetecting backend as ninja 00:01:37.815 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:38.757 CC lib/ut_mock/mock.o 00:01:38.757 CC lib/log/log.o 00:01:38.757 CC lib/log/log_flags.o 00:01:38.757 CC lib/log/log_deprecated.o 00:01:38.757 CC lib/ut/ut.o 00:01:39.028 LIB libspdk_ut_mock.a 00:01:39.028 SO libspdk_ut_mock.so.6.0 00:01:39.028 LIB libspdk_log.a 00:01:39.028 LIB libspdk_ut.a 00:01:39.028 SO libspdk_log.so.7.0 00:01:39.028 SYMLINK libspdk_ut_mock.so 00:01:39.028 SO libspdk_ut.so.2.0 00:01:39.028 SYMLINK libspdk_log.so 00:01:39.028 SYMLINK libspdk_ut.so 00:01:39.291 CC lib/util/base64.o 00:01:39.291 CXX lib/trace_parser/trace.o 00:01:39.291 CC lib/util/bit_array.o 00:01:39.552 CC lib/util/cpuset.o 00:01:39.552 CC lib/util/crc16.o 00:01:39.552 CC lib/util/crc32c.o 00:01:39.552 CC lib/util/crc32.o 00:01:39.552 CC lib/util/crc32_ieee.o 00:01:39.552 CC lib/util/crc64.o 
00:01:39.552 CC lib/util/dif.o 00:01:39.552 CC lib/util/fd.o 00:01:39.552 CC lib/util/file.o 00:01:39.552 CC lib/dma/dma.o 00:01:39.552 CC lib/util/hexlify.o 00:01:39.552 CC lib/util/iov.o 00:01:39.552 CC lib/ioat/ioat.o 00:01:39.552 CC lib/util/math.o 00:01:39.552 CC lib/util/pipe.o 00:01:39.552 CC lib/util/strerror_tls.o 00:01:39.552 CC lib/util/string.o 00:01:39.552 CC lib/util/uuid.o 00:01:39.552 CC lib/util/fd_group.o 00:01:39.552 CC lib/util/xor.o 00:01:39.552 CC lib/util/zipf.o 00:01:39.552 CC lib/vfio_user/host/vfio_user_pci.o 00:01:39.552 CC lib/vfio_user/host/vfio_user.o 00:01:39.552 LIB libspdk_dma.a 00:01:39.813 SO libspdk_dma.so.4.0 00:01:39.813 LIB libspdk_ioat.a 00:01:39.813 SYMLINK libspdk_dma.so 00:01:39.813 SO libspdk_ioat.so.7.0 00:01:39.813 LIB libspdk_vfio_user.a 00:01:39.813 SYMLINK libspdk_ioat.so 00:01:39.813 SO libspdk_vfio_user.so.5.0 00:01:39.813 LIB libspdk_util.a 00:01:40.075 SYMLINK libspdk_vfio_user.so 00:01:40.075 SO libspdk_util.so.9.0 00:01:40.075 SYMLINK libspdk_util.so 00:01:40.336 LIB libspdk_trace_parser.a 00:01:40.336 SO libspdk_trace_parser.so.5.0 00:01:40.336 SYMLINK libspdk_trace_parser.so 00:01:40.597 CC lib/json/json_parse.o 00:01:40.597 CC lib/idxd/idxd.o 00:01:40.597 CC lib/idxd/idxd_user.o 00:01:40.597 CC lib/json/json_util.o 00:01:40.597 CC lib/json/json_write.o 00:01:40.597 CC lib/rdma/common.o 00:01:40.597 CC lib/rdma/rdma_verbs.o 00:01:40.597 CC lib/vmd/vmd.o 00:01:40.597 CC lib/vmd/led.o 00:01:40.597 CC lib/conf/conf.o 00:01:40.597 CC lib/env_dpdk/env.o 00:01:40.597 CC lib/env_dpdk/memory.o 00:01:40.597 CC lib/env_dpdk/pci.o 00:01:40.597 CC lib/env_dpdk/init.o 00:01:40.597 CC lib/env_dpdk/threads.o 00:01:40.597 CC lib/env_dpdk/pci_ioat.o 00:01:40.597 CC lib/env_dpdk/pci_virtio.o 00:01:40.597 CC lib/env_dpdk/pci_vmd.o 00:01:40.597 CC lib/env_dpdk/pci_idxd.o 00:01:40.597 CC lib/env_dpdk/pci_event.o 00:01:40.597 CC lib/env_dpdk/sigbus_handler.o 00:01:40.597 CC lib/env_dpdk/pci_dpdk.o 00:01:40.597 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:40.597 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:40.859 LIB libspdk_conf.a 00:01:40.859 SO libspdk_conf.so.6.0 00:01:40.859 LIB libspdk_rdma.a 00:01:40.859 LIB libspdk_json.a 00:01:40.859 SO libspdk_rdma.so.6.0 00:01:40.859 SYMLINK libspdk_conf.so 00:01:40.859 SO libspdk_json.so.6.0 00:01:40.859 SYMLINK libspdk_rdma.so 00:01:40.859 SYMLINK libspdk_json.so 00:01:41.119 LIB libspdk_idxd.a 00:01:41.119 SO libspdk_idxd.so.12.0 00:01:41.119 LIB libspdk_vmd.a 00:01:41.119 SYMLINK libspdk_idxd.so 00:01:41.119 SO libspdk_vmd.so.6.0 00:01:41.119 SYMLINK libspdk_vmd.so 00:01:41.379 CC lib/jsonrpc/jsonrpc_server.o 00:01:41.379 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:41.379 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:41.379 CC lib/jsonrpc/jsonrpc_client.o 00:01:41.639 LIB libspdk_jsonrpc.a 00:01:41.639 SO libspdk_jsonrpc.so.6.0 00:01:41.639 SYMLINK libspdk_jsonrpc.so 00:01:41.639 LIB libspdk_env_dpdk.a 00:01:41.900 SO libspdk_env_dpdk.so.14.0 00:01:41.900 SYMLINK libspdk_env_dpdk.so 00:01:42.160 CC lib/rpc/rpc.o 00:01:42.160 LIB libspdk_rpc.a 00:01:42.420 SO libspdk_rpc.so.6.0 00:01:42.420 SYMLINK libspdk_rpc.so 00:01:42.681 CC lib/notify/notify.o 00:01:42.681 CC lib/notify/notify_rpc.o 00:01:42.681 CC lib/trace/trace.o 00:01:42.681 CC lib/trace/trace_flags.o 00:01:42.681 CC lib/trace/trace_rpc.o 00:01:42.681 CC lib/keyring/keyring.o 00:01:42.681 CC lib/keyring/keyring_rpc.o 00:01:42.941 LIB libspdk_notify.a 00:01:42.941 SO libspdk_notify.so.6.0 00:01:42.941 LIB libspdk_keyring.a 00:01:42.941 LIB libspdk_trace.a 00:01:42.941 
SYMLINK libspdk_notify.so 00:01:42.941 SO libspdk_keyring.so.1.0 00:01:42.941 SO libspdk_trace.so.10.0 00:01:43.202 SYMLINK libspdk_keyring.so 00:01:43.202 SYMLINK libspdk_trace.so 00:01:43.496 CC lib/sock/sock.o 00:01:43.496 CC lib/sock/sock_rpc.o 00:01:43.496 CC lib/thread/thread.o 00:01:43.496 CC lib/thread/iobuf.o 00:01:43.787 LIB libspdk_sock.a 00:01:43.787 SO libspdk_sock.so.9.0 00:01:43.787 SYMLINK libspdk_sock.so 00:01:44.358 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:44.358 CC lib/nvme/nvme_ctrlr.o 00:01:44.358 CC lib/nvme/nvme_fabric.o 00:01:44.358 CC lib/nvme/nvme_ns_cmd.o 00:01:44.358 CC lib/nvme/nvme_ns.o 00:01:44.358 CC lib/nvme/nvme_pcie_common.o 00:01:44.358 CC lib/nvme/nvme_pcie.o 00:01:44.358 CC lib/nvme/nvme_qpair.o 00:01:44.358 CC lib/nvme/nvme.o 00:01:44.358 CC lib/nvme/nvme_quirks.o 00:01:44.358 CC lib/nvme/nvme_transport.o 00:01:44.358 CC lib/nvme/nvme_discovery.o 00:01:44.358 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:44.358 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:44.358 CC lib/nvme/nvme_tcp.o 00:01:44.358 CC lib/nvme/nvme_opal.o 00:01:44.358 CC lib/nvme/nvme_io_msg.o 00:01:44.358 CC lib/nvme/nvme_poll_group.o 00:01:44.358 CC lib/nvme/nvme_stubs.o 00:01:44.358 CC lib/nvme/nvme_zns.o 00:01:44.358 CC lib/nvme/nvme_auth.o 00:01:44.358 CC lib/nvme/nvme_cuse.o 00:01:44.358 CC lib/nvme/nvme_vfio_user.o 00:01:44.358 CC lib/nvme/nvme_rdma.o 00:01:44.619 LIB libspdk_thread.a 00:01:44.619 SO libspdk_thread.so.10.0 00:01:44.880 SYMLINK libspdk_thread.so 00:01:45.140 CC lib/vfu_tgt/tgt_endpoint.o 00:01:45.140 CC lib/virtio/virtio.o 00:01:45.140 CC lib/vfu_tgt/tgt_rpc.o 00:01:45.140 CC lib/virtio/virtio_vhost_user.o 00:01:45.140 CC lib/virtio/virtio_vfio_user.o 00:01:45.140 CC lib/virtio/virtio_pci.o 00:01:45.140 CC lib/blob/blobstore.o 00:01:45.140 CC lib/blob/request.o 00:01:45.140 CC lib/blob/zeroes.o 00:01:45.140 CC lib/blob/blob_bs_dev.o 00:01:45.140 CC lib/accel/accel.o 00:01:45.140 CC lib/accel/accel_rpc.o 00:01:45.140 CC lib/accel/accel_sw.o 00:01:45.140 CC lib/init/json_config.o 00:01:45.140 CC lib/init/subsystem.o 00:01:45.140 CC lib/init/subsystem_rpc.o 00:01:45.140 CC lib/init/rpc.o 00:01:45.401 LIB libspdk_init.a 00:01:45.401 LIB libspdk_virtio.a 00:01:45.401 LIB libspdk_vfu_tgt.a 00:01:45.401 SO libspdk_init.so.5.0 00:01:45.401 SO libspdk_virtio.so.7.0 00:01:45.401 SO libspdk_vfu_tgt.so.3.0 00:01:45.663 SYMLINK libspdk_init.so 00:01:45.663 SYMLINK libspdk_virtio.so 00:01:45.663 SYMLINK libspdk_vfu_tgt.so 00:01:45.924 CC lib/event/app.o 00:01:45.924 CC lib/event/reactor.o 00:01:45.924 CC lib/event/log_rpc.o 00:01:45.924 CC lib/event/app_rpc.o 00:01:45.924 CC lib/event/scheduler_static.o 00:01:45.924 LIB libspdk_accel.a 00:01:46.185 SO libspdk_accel.so.15.0 00:01:46.185 LIB libspdk_nvme.a 00:01:46.185 SYMLINK libspdk_accel.so 00:01:46.185 SO libspdk_nvme.so.13.0 00:01:46.185 LIB libspdk_event.a 00:01:46.446 SO libspdk_event.so.13.0 00:01:46.446 SYMLINK libspdk_event.so 00:01:46.446 CC lib/bdev/bdev.o 00:01:46.446 CC lib/bdev/bdev_rpc.o 00:01:46.446 CC lib/bdev/part.o 00:01:46.446 CC lib/bdev/bdev_zone.o 00:01:46.446 CC lib/bdev/scsi_nvme.o 00:01:46.446 SYMLINK libspdk_nvme.so 00:01:47.389 LIB libspdk_blob.a 00:01:47.650 SO libspdk_blob.so.11.0 00:01:47.650 SYMLINK libspdk_blob.so 00:01:47.911 CC lib/lvol/lvol.o 00:01:47.911 CC lib/blobfs/blobfs.o 00:01:47.911 CC lib/blobfs/tree.o 00:01:48.856 LIB libspdk_blobfs.a 00:01:48.856 LIB libspdk_bdev.a 00:01:48.856 SO libspdk_blobfs.so.10.0 00:01:48.856 LIB libspdk_lvol.a 00:01:48.856 SO libspdk_bdev.so.15.0 00:01:48.856 SO 
libspdk_lvol.so.10.0 00:01:48.856 SYMLINK libspdk_blobfs.so 00:01:48.856 SYMLINK libspdk_lvol.so 00:01:48.856 SYMLINK libspdk_bdev.so 00:01:49.117 CC lib/ftl/ftl_core.o 00:01:49.117 CC lib/ftl/ftl_init.o 00:01:49.117 CC lib/ftl/ftl_layout.o 00:01:49.117 CC lib/ftl/ftl_debug.o 00:01:49.117 CC lib/ftl/ftl_io.o 00:01:49.117 CC lib/ftl/ftl_sb.o 00:01:49.117 CC lib/ftl/ftl_l2p.o 00:01:49.117 CC lib/ftl/ftl_l2p_flat.o 00:01:49.117 CC lib/ftl/ftl_nv_cache.o 00:01:49.117 CC lib/ftl/ftl_band.o 00:01:49.117 CC lib/ftl/ftl_band_ops.o 00:01:49.117 CC lib/ftl/ftl_writer.o 00:01:49.117 CC lib/ftl/ftl_rq.o 00:01:49.117 CC lib/ftl/ftl_l2p_cache.o 00:01:49.117 CC lib/scsi/dev.o 00:01:49.117 CC lib/ftl/ftl_reloc.o 00:01:49.117 CC lib/ftl/mngt/ftl_mngt.o 00:01:49.117 CC lib/scsi/lun.o 00:01:49.117 CC lib/nbd/nbd.o 00:01:49.117 CC lib/ublk/ublk.o 00:01:49.117 CC lib/ftl/ftl_p2l.o 00:01:49.117 CC lib/nvmf/ctrlr.o 00:01:49.117 CC lib/nbd/nbd_rpc.o 00:01:49.117 CC lib/scsi/port.o 00:01:49.117 CC lib/nvmf/ctrlr_discovery.o 00:01:49.117 CC lib/scsi/scsi.o 00:01:49.117 CC lib/nvmf/ctrlr_bdev.o 00:01:49.117 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:49.117 CC lib/scsi/scsi_bdev.o 00:01:49.117 CC lib/ublk/ublk_rpc.o 00:01:49.117 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:49.117 CC lib/nvmf/subsystem.o 00:01:49.117 CC lib/scsi/scsi_pr.o 00:01:49.117 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:49.117 CC lib/nvmf/nvmf.o 00:01:49.117 CC lib/scsi/scsi_rpc.o 00:01:49.117 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:49.117 CC lib/nvmf/nvmf_rpc.o 00:01:49.117 CC lib/nvmf/tcp.o 00:01:49.117 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:49.117 CC lib/scsi/task.o 00:01:49.117 CC lib/nvmf/transport.o 00:01:49.117 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:49.117 CC lib/nvmf/vfio_user.o 00:01:49.117 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:49.117 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:49.117 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:49.117 CC lib/nvmf/rdma.o 00:01:49.117 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:49.117 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:49.374 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:49.374 CC lib/ftl/utils/ftl_md.o 00:01:49.374 CC lib/ftl/utils/ftl_conf.o 00:01:49.374 CC lib/ftl/utils/ftl_mempool.o 00:01:49.374 CC lib/ftl/utils/ftl_bitmap.o 00:01:49.374 CC lib/ftl/utils/ftl_property.o 00:01:49.374 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:49.374 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:49.374 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:49.374 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:49.374 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:49.374 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:49.375 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:49.375 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:49.375 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:49.375 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:49.375 CC lib/ftl/base/ftl_base_dev.o 00:01:49.375 CC lib/ftl/ftl_trace.o 00:01:49.375 CC lib/ftl/base/ftl_base_bdev.o 00:01:49.632 LIB libspdk_nbd.a 00:01:49.632 SO libspdk_nbd.so.7.0 00:01:49.632 SYMLINK libspdk_nbd.so 00:01:49.892 LIB libspdk_scsi.a 00:01:49.892 SO libspdk_scsi.so.9.0 00:01:49.892 LIB libspdk_ublk.a 00:01:49.892 SO libspdk_ublk.so.3.0 00:01:49.892 SYMLINK libspdk_scsi.so 00:01:49.892 SYMLINK libspdk_ublk.so 00:01:50.151 LIB libspdk_ftl.a 00:01:50.151 CC lib/vhost/vhost.o 00:01:50.151 CC lib/vhost/vhost_rpc.o 00:01:50.151 CC lib/vhost/vhost_scsi.o 00:01:50.151 CC lib/vhost/vhost_blk.o 00:01:50.151 CC lib/vhost/rte_vhost_user.o 00:01:50.151 CC lib/iscsi/conn.o 00:01:50.151 CC lib/iscsi/init_grp.o 00:01:50.151 CC lib/iscsi/iscsi.o 00:01:50.151 SO 
libspdk_ftl.so.9.0 00:01:50.151 CC lib/iscsi/md5.o 00:01:50.151 CC lib/iscsi/portal_grp.o 00:01:50.151 CC lib/iscsi/param.o 00:01:50.151 CC lib/iscsi/iscsi_subsystem.o 00:01:50.151 CC lib/iscsi/tgt_node.o 00:01:50.151 CC lib/iscsi/iscsi_rpc.o 00:01:50.151 CC lib/iscsi/task.o 00:01:50.723 SYMLINK libspdk_ftl.so 00:01:50.983 LIB libspdk_nvmf.a 00:01:51.244 SO libspdk_nvmf.so.18.0 00:01:51.244 LIB libspdk_vhost.a 00:01:51.244 SO libspdk_vhost.so.8.0 00:01:51.244 SYMLINK libspdk_nvmf.so 00:01:51.505 SYMLINK libspdk_vhost.so 00:01:51.505 LIB libspdk_iscsi.a 00:01:51.505 SO libspdk_iscsi.so.8.0 00:01:51.765 SYMLINK libspdk_iscsi.so 00:01:52.335 CC module/env_dpdk/env_dpdk_rpc.o 00:01:52.335 CC module/vfu_device/vfu_virtio.o 00:01:52.335 CC module/vfu_device/vfu_virtio_blk.o 00:01:52.335 CC module/vfu_device/vfu_virtio_scsi.o 00:01:52.335 CC module/vfu_device/vfu_virtio_rpc.o 00:01:52.335 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:52.335 LIB libspdk_env_dpdk_rpc.a 00:01:52.335 CC module/accel/iaa/accel_iaa.o 00:01:52.335 CC module/accel/iaa/accel_iaa_rpc.o 00:01:52.335 CC module/scheduler/gscheduler/gscheduler.o 00:01:52.335 CC module/sock/posix/posix.o 00:01:52.335 CC module/accel/dsa/accel_dsa.o 00:01:52.335 CC module/accel/dsa/accel_dsa_rpc.o 00:01:52.335 CC module/accel/ioat/accel_ioat.o 00:01:52.335 CC module/accel/error/accel_error.o 00:01:52.335 CC module/accel/ioat/accel_ioat_rpc.o 00:01:52.335 CC module/accel/error/accel_error_rpc.o 00:01:52.335 CC module/keyring/file/keyring.o 00:01:52.335 CC module/keyring/file/keyring_rpc.o 00:01:52.335 CC module/blob/bdev/blob_bdev.o 00:01:52.335 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:52.335 SO libspdk_env_dpdk_rpc.so.6.0 00:01:52.596 SYMLINK libspdk_env_dpdk_rpc.so 00:01:52.596 LIB libspdk_scheduler_dynamic.a 00:01:52.596 LIB libspdk_scheduler_gscheduler.a 00:01:52.596 LIB libspdk_keyring_file.a 00:01:52.596 LIB libspdk_scheduler_dpdk_governor.a 00:01:52.597 SO libspdk_scheduler_dynamic.so.4.0 00:01:52.597 LIB libspdk_accel_iaa.a 00:01:52.597 LIB libspdk_accel_ioat.a 00:01:52.597 LIB libspdk_accel_error.a 00:01:52.597 SO libspdk_scheduler_gscheduler.so.4.0 00:01:52.597 SO libspdk_keyring_file.so.1.0 00:01:52.597 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:52.597 SO libspdk_accel_iaa.so.3.0 00:01:52.597 SYMLINK libspdk_scheduler_dynamic.so 00:01:52.597 SO libspdk_accel_ioat.so.6.0 00:01:52.597 LIB libspdk_accel_dsa.a 00:01:52.597 SO libspdk_accel_error.so.2.0 00:01:52.597 LIB libspdk_blob_bdev.a 00:01:52.597 SO libspdk_accel_dsa.so.5.0 00:01:52.597 SYMLINK libspdk_scheduler_gscheduler.so 00:01:52.597 SYMLINK libspdk_keyring_file.so 00:01:52.597 SO libspdk_blob_bdev.so.11.0 00:01:52.597 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:52.597 SYMLINK libspdk_accel_iaa.so 00:01:52.597 SYMLINK libspdk_accel_ioat.so 00:01:52.858 SYMLINK libspdk_accel_error.so 00:01:52.858 SYMLINK libspdk_accel_dsa.so 00:01:52.858 SYMLINK libspdk_blob_bdev.so 00:01:52.858 LIB libspdk_vfu_device.a 00:01:52.858 SO libspdk_vfu_device.so.3.0 00:01:52.858 SYMLINK libspdk_vfu_device.so 00:01:53.119 LIB libspdk_sock_posix.a 00:01:53.119 SO libspdk_sock_posix.so.6.0 00:01:53.119 SYMLINK libspdk_sock_posix.so 00:01:53.378 CC module/bdev/malloc/bdev_malloc.o 00:01:53.378 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:53.378 CC module/bdev/raid/bdev_raid.o 00:01:53.378 CC module/bdev/raid/bdev_raid_sb.o 00:01:53.378 CC module/bdev/raid/bdev_raid_rpc.o 00:01:53.378 CC module/bdev/raid/raid0.o 00:01:53.378 CC module/bdev/raid/raid1.o 00:01:53.378 CC 
module/bdev/raid/concat.o 00:01:53.378 CC module/bdev/gpt/gpt.o 00:01:53.378 CC module/bdev/gpt/vbdev_gpt.o 00:01:53.378 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:53.378 CC module/blobfs/bdev/blobfs_bdev.o 00:01:53.378 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:53.378 CC module/bdev/delay/vbdev_delay.o 00:01:53.378 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:53.378 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:53.378 CC module/bdev/null/bdev_null.o 00:01:53.378 CC module/bdev/null/bdev_null_rpc.o 00:01:53.378 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:53.378 CC module/bdev/split/vbdev_split.o 00:01:53.378 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:53.378 CC module/bdev/split/vbdev_split_rpc.o 00:01:53.378 CC module/bdev/nvme/bdev_nvme.o 00:01:53.378 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:53.378 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:53.378 CC module/bdev/nvme/nvme_rpc.o 00:01:53.378 CC module/bdev/nvme/bdev_mdns_client.o 00:01:53.378 CC module/bdev/nvme/vbdev_opal.o 00:01:53.378 CC module/bdev/error/vbdev_error.o 00:01:53.378 CC module/bdev/error/vbdev_error_rpc.o 00:01:53.378 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:53.378 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:53.378 CC module/bdev/aio/bdev_aio_rpc.o 00:01:53.378 CC module/bdev/aio/bdev_aio.o 00:01:53.378 CC module/bdev/passthru/vbdev_passthru.o 00:01:53.378 CC module/bdev/ftl/bdev_ftl.o 00:01:53.378 CC module/bdev/iscsi/bdev_iscsi.o 00:01:53.378 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:53.378 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:53.378 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:53.378 CC module/bdev/lvol/vbdev_lvol.o 00:01:53.378 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:53.638 LIB libspdk_bdev_null.a 00:01:53.638 LIB libspdk_blobfs_bdev.a 00:01:53.638 LIB libspdk_bdev_split.a 00:01:53.638 SO libspdk_bdev_null.so.6.0 00:01:53.638 SO libspdk_bdev_split.so.6.0 00:01:53.638 SO libspdk_blobfs_bdev.so.6.0 00:01:53.638 LIB libspdk_bdev_gpt.a 00:01:53.638 LIB libspdk_bdev_error.a 00:01:53.638 SYMLINK libspdk_bdev_null.so 00:01:53.638 SO libspdk_bdev_gpt.so.6.0 00:01:53.638 SO libspdk_bdev_error.so.6.0 00:01:53.638 SYMLINK libspdk_blobfs_bdev.so 00:01:53.638 LIB libspdk_bdev_ftl.a 00:01:53.638 SYMLINK libspdk_bdev_split.so 00:01:53.638 LIB libspdk_bdev_passthru.a 00:01:53.638 LIB libspdk_bdev_malloc.a 00:01:53.638 LIB libspdk_bdev_zone_block.a 00:01:53.638 LIB libspdk_bdev_aio.a 00:01:53.638 SO libspdk_bdev_ftl.so.6.0 00:01:53.638 SYMLINK libspdk_bdev_gpt.so 00:01:53.638 SO libspdk_bdev_passthru.so.6.0 00:01:53.638 LIB libspdk_bdev_delay.a 00:01:53.638 SO libspdk_bdev_aio.so.6.0 00:01:53.638 SYMLINK libspdk_bdev_error.so 00:01:53.638 SO libspdk_bdev_malloc.so.6.0 00:01:53.638 SO libspdk_bdev_zone_block.so.6.0 00:01:53.638 LIB libspdk_bdev_iscsi.a 00:01:53.638 SO libspdk_bdev_delay.so.6.0 00:01:53.638 SYMLINK libspdk_bdev_ftl.so 00:01:53.638 SO libspdk_bdev_iscsi.so.6.0 00:01:53.899 SYMLINK libspdk_bdev_passthru.so 00:01:53.899 SYMLINK libspdk_bdev_aio.so 00:01:53.899 SYMLINK libspdk_bdev_malloc.so 00:01:53.899 SYMLINK libspdk_bdev_zone_block.so 00:01:53.899 SYMLINK libspdk_bdev_delay.so 00:01:53.899 LIB libspdk_bdev_lvol.a 00:01:53.899 LIB libspdk_bdev_virtio.a 00:01:53.899 SYMLINK libspdk_bdev_iscsi.so 00:01:53.899 SO libspdk_bdev_lvol.so.6.0 00:01:53.899 SO libspdk_bdev_virtio.so.6.0 00:01:53.899 SYMLINK libspdk_bdev_lvol.so 00:01:53.899 SYMLINK libspdk_bdev_virtio.so 00:01:54.159 LIB libspdk_bdev_raid.a 00:01:54.159 SO libspdk_bdev_raid.so.6.0 00:01:54.159 SYMLINK 
libspdk_bdev_raid.so 00:01:55.098 LIB libspdk_bdev_nvme.a 00:01:55.098 SO libspdk_bdev_nvme.so.7.0 00:01:55.358 SYMLINK libspdk_bdev_nvme.so 00:01:55.929 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:55.929 CC module/event/subsystems/iobuf/iobuf.o 00:01:55.929 CC module/event/subsystems/scheduler/scheduler.o 00:01:55.929 CC module/event/subsystems/keyring/keyring.o 00:01:55.929 CC module/event/subsystems/sock/sock.o 00:01:55.929 CC module/event/subsystems/vmd/vmd.o 00:01:55.929 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:55.929 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:55.929 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:56.190 LIB libspdk_event_scheduler.a 00:01:56.190 LIB libspdk_event_iobuf.a 00:01:56.190 LIB libspdk_event_vfu_tgt.a 00:01:56.190 LIB libspdk_event_sock.a 00:01:56.190 LIB libspdk_event_keyring.a 00:01:56.190 LIB libspdk_event_vhost_blk.a 00:01:56.190 LIB libspdk_event_vmd.a 00:01:56.190 SO libspdk_event_scheduler.so.4.0 00:01:56.190 SO libspdk_event_iobuf.so.3.0 00:01:56.190 SO libspdk_event_vfu_tgt.so.3.0 00:01:56.190 SO libspdk_event_keyring.so.1.0 00:01:56.190 SO libspdk_event_sock.so.5.0 00:01:56.190 SO libspdk_event_vhost_blk.so.3.0 00:01:56.190 SO libspdk_event_vmd.so.6.0 00:01:56.190 SYMLINK libspdk_event_scheduler.so 00:01:56.190 SYMLINK libspdk_event_vfu_tgt.so 00:01:56.190 SYMLINK libspdk_event_iobuf.so 00:01:56.450 SYMLINK libspdk_event_keyring.so 00:01:56.450 SYMLINK libspdk_event_sock.so 00:01:56.450 SYMLINK libspdk_event_vhost_blk.so 00:01:56.450 SYMLINK libspdk_event_vmd.so 00:01:56.710 CC module/event/subsystems/accel/accel.o 00:01:56.710 LIB libspdk_event_accel.a 00:01:56.970 SO libspdk_event_accel.so.6.0 00:01:56.970 SYMLINK libspdk_event_accel.so 00:01:57.229 CC module/event/subsystems/bdev/bdev.o 00:01:57.490 LIB libspdk_event_bdev.a 00:01:57.490 SO libspdk_event_bdev.so.6.0 00:01:57.490 SYMLINK libspdk_event_bdev.so 00:01:57.750 CC module/event/subsystems/ublk/ublk.o 00:01:57.750 CC module/event/subsystems/scsi/scsi.o 00:01:57.750 CC module/event/subsystems/nbd/nbd.o 00:01:57.750 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:57.750 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:58.010 LIB libspdk_event_nbd.a 00:01:58.010 LIB libspdk_event_ublk.a 00:01:58.010 LIB libspdk_event_scsi.a 00:01:58.010 SO libspdk_event_nbd.so.6.0 00:01:58.010 SO libspdk_event_ublk.so.3.0 00:01:58.010 SO libspdk_event_scsi.so.6.0 00:01:58.010 LIB libspdk_event_nvmf.a 00:01:58.010 SYMLINK libspdk_event_nbd.so 00:01:58.010 SYMLINK libspdk_event_ublk.so 00:01:58.010 SYMLINK libspdk_event_scsi.so 00:01:58.010 SO libspdk_event_nvmf.so.6.0 00:01:58.269 SYMLINK libspdk_event_nvmf.so 00:01:58.529 CC module/event/subsystems/iscsi/iscsi.o 00:01:58.529 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:58.529 LIB libspdk_event_iscsi.a 00:01:58.529 LIB libspdk_event_vhost_scsi.a 00:01:58.789 SO libspdk_event_iscsi.so.6.0 00:01:58.789 SO libspdk_event_vhost_scsi.so.3.0 00:01:58.789 SYMLINK libspdk_event_iscsi.so 00:01:58.789 SYMLINK libspdk_event_vhost_scsi.so 00:01:59.048 SO libspdk.so.6.0 00:01:59.048 SYMLINK libspdk.so 00:01:59.306 CXX app/trace/trace.o 00:01:59.306 CC app/spdk_lspci/spdk_lspci.o 00:01:59.306 CC app/spdk_nvme_discover/discovery_aer.o 00:01:59.306 CC app/trace_record/trace_record.o 00:01:59.306 CC app/spdk_top/spdk_top.o 00:01:59.306 TEST_HEADER include/spdk/accel.h 00:01:59.306 CC test/rpc_client/rpc_client_test.o 00:01:59.306 TEST_HEADER include/spdk/accel_module.h 00:01:59.306 CC app/spdk_nvme_identify/identify.o 00:01:59.306 
TEST_HEADER include/spdk/barrier.h 00:01:59.306 TEST_HEADER include/spdk/assert.h 00:01:59.306 TEST_HEADER include/spdk/base64.h 00:01:59.306 CC app/spdk_nvme_perf/perf.o 00:01:59.306 TEST_HEADER include/spdk/bdev.h 00:01:59.306 TEST_HEADER include/spdk/bdev_zone.h 00:01:59.306 TEST_HEADER include/spdk/bit_array.h 00:01:59.306 TEST_HEADER include/spdk/bdev_module.h 00:01:59.306 TEST_HEADER include/spdk/bit_pool.h 00:01:59.306 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:59.306 TEST_HEADER include/spdk/blob_bdev.h 00:01:59.306 TEST_HEADER include/spdk/blob.h 00:01:59.306 TEST_HEADER include/spdk/conf.h 00:01:59.306 TEST_HEADER include/spdk/blobfs.h 00:01:59.306 TEST_HEADER include/spdk/config.h 00:01:59.306 CC app/vhost/vhost.o 00:01:59.306 TEST_HEADER include/spdk/cpuset.h 00:01:59.306 TEST_HEADER include/spdk/crc32.h 00:01:59.306 CC app/iscsi_tgt/iscsi_tgt.o 00:01:59.306 TEST_HEADER include/spdk/crc16.h 00:01:59.306 TEST_HEADER include/spdk/crc64.h 00:01:59.306 CC app/spdk_dd/spdk_dd.o 00:01:59.307 TEST_HEADER include/spdk/dif.h 00:01:59.307 TEST_HEADER include/spdk/dma.h 00:01:59.307 TEST_HEADER include/spdk/env_dpdk.h 00:01:59.307 TEST_HEADER include/spdk/endian.h 00:01:59.307 TEST_HEADER include/spdk/env.h 00:01:59.307 TEST_HEADER include/spdk/event.h 00:01:59.307 TEST_HEADER include/spdk/fd.h 00:01:59.570 TEST_HEADER include/spdk/fd_group.h 00:01:59.570 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:59.570 TEST_HEADER include/spdk/ftl.h 00:01:59.570 TEST_HEADER include/spdk/gpt_spec.h 00:01:59.570 TEST_HEADER include/spdk/file.h 00:01:59.570 TEST_HEADER include/spdk/hexlify.h 00:01:59.570 TEST_HEADER include/spdk/idxd.h 00:01:59.570 TEST_HEADER include/spdk/histogram_data.h 00:01:59.570 TEST_HEADER include/spdk/idxd_spec.h 00:01:59.570 CC app/spdk_tgt/spdk_tgt.o 00:01:59.570 TEST_HEADER include/spdk/ioat.h 00:01:59.570 TEST_HEADER include/spdk/init.h 00:01:59.570 CC app/nvmf_tgt/nvmf_main.o 00:01:59.570 TEST_HEADER include/spdk/iscsi_spec.h 00:01:59.570 TEST_HEADER include/spdk/ioat_spec.h 00:01:59.570 TEST_HEADER include/spdk/json.h 00:01:59.570 TEST_HEADER include/spdk/jsonrpc.h 00:01:59.570 TEST_HEADER include/spdk/keyring.h 00:01:59.570 TEST_HEADER include/spdk/keyring_module.h 00:01:59.570 TEST_HEADER include/spdk/likely.h 00:01:59.570 TEST_HEADER include/spdk/log.h 00:01:59.570 TEST_HEADER include/spdk/lvol.h 00:01:59.570 TEST_HEADER include/spdk/memory.h 00:01:59.570 TEST_HEADER include/spdk/mmio.h 00:01:59.570 TEST_HEADER include/spdk/notify.h 00:01:59.570 TEST_HEADER include/spdk/nbd.h 00:01:59.570 TEST_HEADER include/spdk/nvme.h 00:01:59.570 TEST_HEADER include/spdk/nvme_intel.h 00:01:59.570 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:59.570 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:59.570 TEST_HEADER include/spdk/nvme_zns.h 00:01:59.570 TEST_HEADER include/spdk/nvme_spec.h 00:01:59.570 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:59.570 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:59.570 TEST_HEADER include/spdk/nvmf_transport.h 00:01:59.570 TEST_HEADER include/spdk/nvmf_spec.h 00:01:59.570 TEST_HEADER include/spdk/opal.h 00:01:59.570 TEST_HEADER include/spdk/nvmf.h 00:01:59.570 TEST_HEADER include/spdk/opal_spec.h 00:01:59.570 TEST_HEADER include/spdk/pci_ids.h 00:01:59.570 TEST_HEADER include/spdk/queue.h 00:01:59.570 TEST_HEADER include/spdk/reduce.h 00:01:59.570 TEST_HEADER include/spdk/pipe.h 00:01:59.570 TEST_HEADER include/spdk/rpc.h 00:01:59.570 TEST_HEADER include/spdk/scheduler.h 00:01:59.570 TEST_HEADER include/spdk/scsi.h 00:01:59.570 TEST_HEADER 
include/spdk/scsi_spec.h 00:01:59.570 TEST_HEADER include/spdk/sock.h 00:01:59.570 TEST_HEADER include/spdk/string.h 00:01:59.570 TEST_HEADER include/spdk/stdinc.h 00:01:59.570 TEST_HEADER include/spdk/thread.h 00:01:59.570 TEST_HEADER include/spdk/trace.h 00:01:59.570 TEST_HEADER include/spdk/trace_parser.h 00:01:59.570 TEST_HEADER include/spdk/tree.h 00:01:59.570 TEST_HEADER include/spdk/ublk.h 00:01:59.570 TEST_HEADER include/spdk/util.h 00:01:59.570 TEST_HEADER include/spdk/uuid.h 00:01:59.570 TEST_HEADER include/spdk/version.h 00:01:59.570 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:59.570 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:59.570 TEST_HEADER include/spdk/xor.h 00:01:59.570 TEST_HEADER include/spdk/vhost.h 00:01:59.570 TEST_HEADER include/spdk/vmd.h 00:01:59.570 TEST_HEADER include/spdk/zipf.h 00:01:59.571 CXX test/cpp_headers/accel.o 00:01:59.571 CXX test/cpp_headers/assert.o 00:01:59.571 CXX test/cpp_headers/accel_module.o 00:01:59.571 CXX test/cpp_headers/barrier.o 00:01:59.571 CXX test/cpp_headers/base64.o 00:01:59.571 CXX test/cpp_headers/bdev.o 00:01:59.571 CXX test/cpp_headers/bdev_zone.o 00:01:59.571 CXX test/cpp_headers/bdev_module.o 00:01:59.571 CXX test/cpp_headers/bit_pool.o 00:01:59.571 CXX test/cpp_headers/bit_array.o 00:01:59.571 CXX test/cpp_headers/blob_bdev.o 00:01:59.571 CXX test/cpp_headers/blobfs_bdev.o 00:01:59.571 CXX test/cpp_headers/blobfs.o 00:01:59.571 CXX test/cpp_headers/blob.o 00:01:59.571 CXX test/cpp_headers/config.o 00:01:59.571 CXX test/cpp_headers/conf.o 00:01:59.571 CXX test/cpp_headers/cpuset.o 00:01:59.571 CXX test/cpp_headers/crc32.o 00:01:59.571 CXX test/cpp_headers/crc16.o 00:01:59.571 CXX test/cpp_headers/dma.o 00:01:59.571 CXX test/cpp_headers/crc64.o 00:01:59.571 CXX test/cpp_headers/dif.o 00:01:59.571 CXX test/cpp_headers/endian.o 00:01:59.571 CXX test/cpp_headers/env_dpdk.o 00:01:59.571 CXX test/cpp_headers/env.o 00:01:59.571 CXX test/cpp_headers/event.o 00:01:59.571 CXX test/cpp_headers/fd_group.o 00:01:59.571 CXX test/cpp_headers/fd.o 00:01:59.571 CXX test/cpp_headers/ftl.o 00:01:59.571 CXX test/cpp_headers/file.o 00:01:59.571 CXX test/cpp_headers/gpt_spec.o 00:01:59.571 CXX test/cpp_headers/histogram_data.o 00:01:59.571 CXX test/cpp_headers/hexlify.o 00:01:59.571 CXX test/cpp_headers/idxd.o 00:01:59.571 CXX test/cpp_headers/idxd_spec.o 00:01:59.571 CXX test/cpp_headers/init.o 00:01:59.571 CXX test/cpp_headers/ioat.o 00:01:59.571 CXX test/cpp_headers/ioat_spec.o 00:01:59.571 CXX test/cpp_headers/json.o 00:01:59.571 CXX test/cpp_headers/iscsi_spec.o 00:01:59.571 CXX test/cpp_headers/jsonrpc.o 00:01:59.571 CXX test/cpp_headers/likely.o 00:01:59.571 CXX test/cpp_headers/keyring.o 00:01:59.571 CXX test/cpp_headers/log.o 00:01:59.571 CXX test/cpp_headers/keyring_module.o 00:01:59.571 CXX test/cpp_headers/lvol.o 00:01:59.571 CXX test/cpp_headers/memory.o 00:01:59.571 CXX test/cpp_headers/mmio.o 00:01:59.571 CXX test/cpp_headers/nbd.o 00:01:59.571 CXX test/cpp_headers/nvme_intel.o 00:01:59.571 CXX test/cpp_headers/notify.o 00:01:59.571 CXX test/cpp_headers/nvme.o 00:01:59.571 CXX test/cpp_headers/nvme_ocssd.o 00:01:59.571 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:59.571 CXX test/cpp_headers/nvme_zns.o 00:01:59.571 CXX test/cpp_headers/nvme_spec.o 00:01:59.571 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:59.571 CXX test/cpp_headers/nvmf_cmd.o 00:01:59.571 CXX test/cpp_headers/nvmf.o 00:01:59.571 CXX test/cpp_headers/nvmf_spec.o 00:01:59.571 CXX test/cpp_headers/opal.o 00:01:59.571 CXX test/cpp_headers/nvmf_transport.o 
00:01:59.571 CXX test/cpp_headers/opal_spec.o 00:01:59.571 CXX test/cpp_headers/pipe.o 00:01:59.571 CXX test/cpp_headers/pci_ids.o 00:01:59.571 CXX test/cpp_headers/queue.o 00:01:59.571 CXX test/cpp_headers/reduce.o 00:01:59.571 CXX test/cpp_headers/rpc.o 00:01:59.571 CXX test/cpp_headers/scheduler.o 00:01:59.571 CC examples/sock/hello_world/hello_sock.o 00:01:59.571 CC examples/util/zipf/zipf.o 00:01:59.571 CXX test/cpp_headers/scsi.o 00:01:59.571 CC test/app/histogram_perf/histogram_perf.o 00:01:59.571 CC examples/idxd/perf/perf.o 00:01:59.571 CC examples/ioat/perf/perf.o 00:01:59.571 CC examples/ioat/verify/verify.o 00:01:59.571 CC examples/vmd/led/led.o 00:01:59.571 CC test/app/jsoncat/jsoncat.o 00:01:59.571 CC examples/vmd/lsvmd/lsvmd.o 00:01:59.571 CC app/fio/nvme/fio_plugin.o 00:01:59.571 CC test/event/reactor_perf/reactor_perf.o 00:01:59.571 CC test/event/event_perf/event_perf.o 00:01:59.571 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:59.571 CC test/env/memory/memory_ut.o 00:01:59.571 CC test/event/reactor/reactor.o 00:01:59.571 CC examples/accel/perf/accel_perf.o 00:01:59.571 CC examples/nvme/reconnect/reconnect.o 00:01:59.571 CC test/app/stub/stub.o 00:01:59.571 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:59.571 CC examples/nvme/arbitration/arbitration.o 00:01:59.571 CC test/thread/poller_perf/poller_perf.o 00:01:59.571 CC examples/bdev/hello_world/hello_bdev.o 00:01:59.833 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:59.833 CC examples/nvmf/nvmf/nvmf.o 00:01:59.833 CC examples/nvme/abort/abort.o 00:01:59.833 CC examples/bdev/bdevperf/bdevperf.o 00:01:59.833 CC test/nvme/sgl/sgl.o 00:01:59.833 CC test/nvme/aer/aer.o 00:01:59.833 CC test/nvme/err_injection/err_injection.o 00:01:59.833 CC test/event/app_repeat/app_repeat.o 00:01:59.833 CC test/env/vtophys/vtophys.o 00:01:59.833 CC test/nvme/reserve/reserve.o 00:01:59.833 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:59.833 CC examples/nvme/hello_world/hello_world.o 00:01:59.833 CC test/nvme/reset/reset.o 00:01:59.833 CC test/env/pci/pci_ut.o 00:01:59.833 CC test/accel/dif/dif.o 00:01:59.833 CC test/nvme/startup/startup.o 00:01:59.833 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:59.833 CC test/nvme/overhead/overhead.o 00:01:59.833 CC test/nvme/connect_stress/connect_stress.o 00:01:59.833 CC test/nvme/e2edp/nvme_dp.o 00:01:59.833 CC test/nvme/fused_ordering/fused_ordering.o 00:01:59.833 CC test/nvme/fdp/fdp.o 00:01:59.833 CC test/dma/test_dma/test_dma.o 00:01:59.833 CC app/fio/bdev/fio_plugin.o 00:01:59.833 CC test/nvme/simple_copy/simple_copy.o 00:01:59.833 CC test/nvme/compliance/nvme_compliance.o 00:01:59.833 CC examples/nvme/hotplug/hotplug.o 00:01:59.833 CC test/nvme/cuse/cuse.o 00:01:59.833 CC test/app/bdev_svc/bdev_svc.o 00:01:59.833 CC test/bdev/bdevio/bdevio.o 00:01:59.833 CC examples/blob/cli/blobcli.o 00:01:59.833 LINK spdk_lspci 00:01:59.833 CC test/nvme/boot_partition/boot_partition.o 00:01:59.833 CC test/event/scheduler/scheduler.o 00:01:59.833 CC examples/blob/hello_world/hello_blob.o 00:01:59.833 CC examples/thread/thread/thread_ex.o 00:01:59.833 CC test/blobfs/mkfs/mkfs.o 00:02:00.099 LINK rpc_client_test 00:02:00.099 LINK spdk_nvme_discover 00:02:00.099 LINK interrupt_tgt 00:02:00.099 LINK vhost 00:02:00.099 LINK spdk_trace_record 00:02:00.099 LINK iscsi_tgt 00:02:00.099 CC test/env/mem_callbacks/mem_callbacks.o 00:02:00.099 LINK spdk_tgt 00:02:00.099 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:00.099 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:00.099 CC test/lvol/esnap/esnap.o 
00:02:00.099 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:00.099 LINK nvmf_tgt 00:02:00.099 LINK reactor_perf 00:02:00.357 LINK jsoncat 00:02:00.357 LINK lsvmd 00:02:00.357 LINK led 00:02:00.357 LINK event_perf 00:02:00.357 LINK histogram_perf 00:02:00.357 LINK zipf 00:02:00.357 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:00.357 LINK vtophys 00:02:00.357 LINK reactor 00:02:00.357 LINK cmb_copy 00:02:00.357 LINK poller_perf 00:02:00.357 LINK env_dpdk_post_init 00:02:00.357 CXX test/cpp_headers/scsi_spec.o 00:02:00.357 LINK app_repeat 00:02:00.357 CXX test/cpp_headers/stdinc.o 00:02:00.357 CXX test/cpp_headers/sock.o 00:02:00.357 CXX test/cpp_headers/string.o 00:02:00.357 CXX test/cpp_headers/thread.o 00:02:00.357 CXX test/cpp_headers/trace.o 00:02:00.357 LINK stub 00:02:00.357 LINK connect_stress 00:02:00.357 CXX test/cpp_headers/trace_parser.o 00:02:00.357 LINK ioat_perf 00:02:00.357 LINK hello_sock 00:02:00.357 CXX test/cpp_headers/tree.o 00:02:00.357 LINK boot_partition 00:02:00.357 CXX test/cpp_headers/ublk.o 00:02:00.357 CXX test/cpp_headers/util.o 00:02:00.357 CXX test/cpp_headers/uuid.o 00:02:00.357 CXX test/cpp_headers/version.o 00:02:00.357 LINK verify 00:02:00.357 CXX test/cpp_headers/vfio_user_pci.o 00:02:00.357 CXX test/cpp_headers/vfio_user_spec.o 00:02:00.357 LINK pmr_persistence 00:02:00.357 CXX test/cpp_headers/vhost.o 00:02:00.357 CXX test/cpp_headers/vmd.o 00:02:00.357 LINK doorbell_aers 00:02:00.357 CXX test/cpp_headers/xor.o 00:02:00.357 CXX test/cpp_headers/zipf.o 00:02:00.357 LINK bdev_svc 00:02:00.357 LINK spdk_dd 00:02:00.357 LINK err_injection 00:02:00.357 LINK startup 00:02:00.357 LINK fused_ordering 00:02:00.357 LINK hello_world 00:02:00.615 LINK reserve 00:02:00.615 LINK reset 00:02:00.615 LINK mkfs 00:02:00.615 LINK hotplug 00:02:00.615 LINK hello_bdev 00:02:00.615 LINK hello_blob 00:02:00.615 LINK scheduler 00:02:00.615 LINK thread 00:02:00.615 LINK sgl 00:02:00.615 LINK simple_copy 00:02:00.615 LINK aer 00:02:00.615 LINK arbitration 00:02:00.615 LINK overhead 00:02:00.615 LINK nvme_compliance 00:02:00.615 LINK idxd_perf 00:02:00.615 LINK nvme_dp 00:02:00.615 LINK nvmf 00:02:00.615 LINK spdk_trace 00:02:00.615 LINK fdp 00:02:00.615 LINK reconnect 00:02:00.615 LINK dif 00:02:00.615 LINK abort 00:02:00.615 LINK test_dma 00:02:00.615 LINK spdk_nvme 00:02:00.615 LINK bdevio 00:02:00.615 LINK pci_ut 00:02:00.615 LINK spdk_bdev 00:02:00.615 LINK accel_perf 00:02:00.874 LINK nvme_manage 00:02:00.874 LINK blobcli 00:02:00.874 LINK nvme_fuzz 00:02:00.874 LINK vhost_fuzz 00:02:00.874 LINK spdk_nvme_perf 00:02:00.874 LINK spdk_top 00:02:00.874 LINK spdk_nvme_identify 00:02:00.874 LINK mem_callbacks 00:02:01.135 LINK bdevperf 00:02:01.135 LINK memory_ut 00:02:01.135 LINK cuse 00:02:02.083 LINK iscsi_fuzz 00:02:04.065 LINK esnap 00:02:04.331 00:02:04.331 real 0m49.020s 00:02:04.331 user 6m33.645s 00:02:04.331 sys 4m37.361s 00:02:04.331 11:58:05 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:04.331 11:58:05 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.331 ************************************ 00:02:04.331 END TEST make 00:02:04.331 ************************************ 00:02:04.331 11:58:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:04.331 11:58:05 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:04.331 11:58:05 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:04.331 11:58:05 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.331 11:58:05 -- pm/common@44 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:04.331 11:58:05 -- pm/common@45 -- $ pid=3070292 00:02:04.331 11:58:05 -- pm/common@52 -- $ sudo kill -TERM 3070292 00:02:04.593 11:58:05 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.593 11:58:05 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:04.593 11:58:05 -- pm/common@45 -- $ pid=3070287 00:02:04.593 11:58:05 -- pm/common@52 -- $ sudo kill -TERM 3070287 00:02:04.593 11:58:05 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.593 11:58:05 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:04.593 11:58:05 -- pm/common@45 -- $ pid=3070295 00:02:04.593 11:58:05 -- pm/common@52 -- $ sudo kill -TERM 3070295 00:02:04.593 11:58:05 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.593 11:58:05 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:04.593 11:58:05 -- pm/common@45 -- $ pid=3070294 00:02:04.593 11:58:05 -- pm/common@52 -- $ sudo kill -TERM 3070294 00:02:04.593 11:58:05 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:04.593 11:58:05 -- nvmf/common.sh@7 -- # uname -s 00:02:04.593 11:58:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:04.593 11:58:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:04.593 11:58:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:04.593 11:58:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:04.593 11:58:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:04.593 11:58:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:04.593 11:58:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:04.593 11:58:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:04.593 11:58:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:04.593 11:58:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:04.593 11:58:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:04.593 11:58:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:04.593 11:58:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:04.593 11:58:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:04.593 11:58:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:04.593 11:58:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:04.593 11:58:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:04.593 11:58:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:04.593 11:58:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:04.593 11:58:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:04.593 11:58:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.593 11:58:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.593 11:58:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.593 11:58:05 -- paths/export.sh@5 -- # export PATH 00:02:04.593 11:58:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.593 11:58:05 -- nvmf/common.sh@47 -- # : 0 00:02:04.593 11:58:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:04.593 11:58:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:04.593 11:58:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:04.593 11:58:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:04.593 11:58:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:04.593 11:58:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:04.593 11:58:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:04.593 11:58:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:04.593 11:58:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:04.593 11:58:05 -- spdk/autotest.sh@32 -- # uname -s 00:02:04.593 11:58:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:04.593 11:58:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:04.593 11:58:05 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:04.593 11:58:05 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:04.593 11:58:05 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:04.593 11:58:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:04.593 11:58:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:04.593 11:58:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:04.593 11:58:05 -- spdk/autotest.sh@48 -- # udevadm_pid=3133009 00:02:04.593 11:58:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:04.593 11:58:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:04.593 11:58:05 -- pm/common@17 -- # local monitor 00:02:04.593 11:58:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.593 11:58:05 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3133011 00:02:04.593 11:58:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.593 11:58:05 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3133014 00:02:04.593 11:58:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.593 11:58:05 -- pm/common@21 -- # date +%s 00:02:04.593 11:58:05 -- pm/common@21 -- # date +%s 00:02:04.593 11:58:05 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3133016 00:02:04.593 11:58:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.593 11:58:05 -- pm/common@23 -- # 
MONITOR_RESOURCES_PIDS["$monitor"]=3133020 00:02:04.593 11:58:05 -- pm/common@26 -- # sleep 1 00:02:04.593 11:58:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714125485 00:02:04.593 11:58:05 -- pm/common@21 -- # date +%s 00:02:04.854 11:58:05 -- pm/common@21 -- # date +%s 00:02:04.854 11:58:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714125485 00:02:04.854 11:58:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714125485 00:02:04.854 11:58:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714125485 00:02:04.854 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714125485_collect-cpu-load.pm.log 00:02:04.854 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714125485_collect-vmstat.pm.log 00:02:04.854 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714125485_collect-bmc-pm.bmc.pm.log 00:02:04.854 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714125485_collect-cpu-temp.pm.log 00:02:05.854 11:58:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:05.854 11:58:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:05.854 11:58:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:05.854 11:58:06 -- common/autotest_common.sh@10 -- # set +x 00:02:05.854 11:58:06 -- spdk/autotest.sh@59 -- # create_test_list 00:02:05.854 11:58:06 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:05.854 11:58:06 -- common/autotest_common.sh@10 -- # set +x 00:02:05.854 11:58:06 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:05.854 11:58:06 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:05.854 11:58:06 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:05.854 11:58:06 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:05.854 11:58:06 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:05.854 11:58:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:05.854 11:58:06 -- common/autotest_common.sh@1441 -- # uname 00:02:05.855 11:58:06 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:05.855 11:58:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:05.855 11:58:06 -- common/autotest_common.sh@1461 -- # uname 00:02:05.855 11:58:06 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:05.855 11:58:06 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:05.855 11:58:06 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:05.855 11:58:06 -- spdk/autotest.sh@72 -- # hash lcov 00:02:05.855 11:58:06 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc 
== *\c\l\a\n\g* ]] 00:02:05.855 11:58:06 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:05.855 --rc lcov_branch_coverage=1 00:02:05.855 --rc lcov_function_coverage=1 00:02:05.855 --rc genhtml_branch_coverage=1 00:02:05.855 --rc genhtml_function_coverage=1 00:02:05.855 --rc genhtml_legend=1 00:02:05.855 --rc geninfo_all_blocks=1 00:02:05.855 ' 00:02:05.855 11:58:06 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:05.855 --rc lcov_branch_coverage=1 00:02:05.855 --rc lcov_function_coverage=1 00:02:05.855 --rc genhtml_branch_coverage=1 00:02:05.855 --rc genhtml_function_coverage=1 00:02:05.855 --rc genhtml_legend=1 00:02:05.855 --rc geninfo_all_blocks=1 00:02:05.855 ' 00:02:05.855 11:58:06 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:05.855 --rc lcov_branch_coverage=1 00:02:05.855 --rc lcov_function_coverage=1 00:02:05.855 --rc genhtml_branch_coverage=1 00:02:05.855 --rc genhtml_function_coverage=1 00:02:05.855 --rc genhtml_legend=1 00:02:05.855 --rc geninfo_all_blocks=1 00:02:05.855 --no-external' 00:02:05.855 11:58:06 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:05.855 --rc lcov_branch_coverage=1 00:02:05.855 --rc lcov_function_coverage=1 00:02:05.855 --rc genhtml_branch_coverage=1 00:02:05.855 --rc genhtml_function_coverage=1 00:02:05.855 --rc genhtml_legend=1 00:02:05.855 --rc geninfo_all_blocks=1 00:02:05.855 --no-external' 00:02:05.855 11:58:06 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:05.855 lcov: LCOV version 1.14 00:02:05.855 11:58:06 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:13.996 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:13.996 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:13.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:13.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 
00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:13.997 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:13.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:13.998 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:13.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:13.998 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:17.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:17.298 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:27.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:27.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:27.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:27.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:27.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:27.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:35.432 11:58:35 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:35.432 11:58:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:35.432 11:58:35 -- common/autotest_common.sh@10 -- # set +x 00:02:35.432 11:58:35 -- spdk/autotest.sh@91 -- # rm -f 00:02:35.432 11:58:35 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:37.974 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:37.974 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:37.974 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:38.234 11:58:39 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:38.234 11:58:39 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:38.234 11:58:39 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:38.234 11:58:39 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:38.234 11:58:39 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:38.234 11:58:39 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:38.234 11:58:39 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:38.234 11:58:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:38.234 11:58:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:38.234 11:58:39 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:38.234 11:58:39 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:38.234 11:58:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:38.234 11:58:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 
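Annotation: the pre-cleanup trace above filters out zoned namespaces (via /sys/block/<dev>/queue/zoned) before autotest wipes the drive. A minimal sketch of that same sysfs check follows; the device name nvme0n1 and the sysfs path are taken from the trace, while the loop and echo messages are illustrative only.

  # Sketch: skip zoned block devices before destructive cleanup,
  # mirroring the [[ -e /sys/block/<dev>/queue/zoned ]] test in the trace.
  is_block_zoned() {
    local dev=$1
    # Older kernels have no "zoned" attribute; non-zoned devices report "none".
    [[ -e /sys/block/$dev/queue/zoned ]] || return 1
    [[ $(cat "/sys/block/$dev/queue/zoned") != none ]]
  }

  for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    if is_block_zoned "$dev"; then
      echo "skipping zoned device $dev"
      continue
    fi
    echo "$dev is safe to wipe"
  done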
00:02:38.234 11:58:39 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:38.234 11:58:39 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:38.234 No valid GPT data, bailing 00:02:38.234 11:58:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:38.234 11:58:39 -- scripts/common.sh@391 -- # pt= 00:02:38.234 11:58:39 -- scripts/common.sh@392 -- # return 1 00:02:38.234 11:58:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:38.234 1+0 records in 00:02:38.234 1+0 records out 00:02:38.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00149753 s, 700 MB/s 00:02:38.234 11:58:39 -- spdk/autotest.sh@118 -- # sync 00:02:38.234 11:58:39 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:38.234 11:58:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:38.234 11:58:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:46.368 11:58:47 -- spdk/autotest.sh@124 -- # uname -s 00:02:46.368 11:58:47 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:46.368 11:58:47 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:46.368 11:58:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:46.368 11:58:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:46.368 11:58:47 -- common/autotest_common.sh@10 -- # set +x 00:02:46.368 ************************************ 00:02:46.368 START TEST setup.sh 00:02:46.368 ************************************ 00:02:46.368 11:58:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:46.368 * Looking for test storage... 00:02:46.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:46.368 11:58:47 -- setup/test-setup.sh@10 -- # uname -s 00:02:46.368 11:58:47 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:46.368 11:58:47 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:46.368 11:58:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:46.369 11:58:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:46.369 11:58:47 -- common/autotest_common.sh@10 -- # set +x 00:02:46.369 ************************************ 00:02:46.369 START TEST acl 00:02:46.369 ************************************ 00:02:46.369 11:58:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:46.629 * Looking for test storage... 
00:02:46.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:46.629 11:58:47 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:46.629 11:58:47 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:46.629 11:58:47 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:46.629 11:58:47 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:46.629 11:58:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:46.629 11:58:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:46.629 11:58:47 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:46.629 11:58:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:46.629 11:58:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:46.629 11:58:47 -- setup/acl.sh@12 -- # devs=() 00:02:46.629 11:58:47 -- setup/acl.sh@12 -- # declare -a devs 00:02:46.629 11:58:47 -- setup/acl.sh@13 -- # drivers=() 00:02:46.629 11:58:47 -- setup/acl.sh@13 -- # declare -A drivers 00:02:46.629 11:58:47 -- setup/acl.sh@51 -- # setup reset 00:02:46.629 11:58:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:46.629 11:58:47 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:50.831 11:58:51 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:50.831 11:58:51 -- setup/acl.sh@16 -- # local dev driver 00:02:50.831 11:58:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.831 11:58:51 -- setup/acl.sh@15 -- # setup output status 00:02:50.831 11:58:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.831 11:58:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:54.175 Hugepages 00:02:54.175 node hugesize free / total 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # continue 00:02:54.176 11:58:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # continue 00:02:54.176 11:58:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # continue 00:02:54.176 11:58:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 00:02:54.176 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # continue 00:02:54.176 11:58:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
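Annotation: the acl.sh trace above walks the table printed by `setup.sh status` ("Type BDF Vendor Device NUMA Driver Device Block devices") and keeps only NVMe-bound PCI functions in the devs/drivers collections. A condensed sketch of that parsing pattern is below; the column order, the `read -r _ dev _ _ _ driver _` shape, and the PCI_BLOCKED filter come from the trace, while the relative setup.sh path and the final printf are assumptions for illustration.

  # Sketch: collect NVMe controllers from `setup.sh status` output.
  devs=()
  declare -A drivers
  while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue              # skip header and hugepage rows
    [[ $driver == nvme ]] || continue              # keep only NVMe-bound functions
    [[ ${PCI_BLOCKED:-} != *"$dev"* ]] || continue # honor the block list, as acl.sh does
    devs+=("$dev")
    drivers["$dev"]=$driver
  done < <(./scripts/setup.sh status)
  printf 'found %d NVMe controller(s)\n' "${#devs[@]}"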
00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:54 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:54 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:55 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:54.176 11:58:55 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:02:54.176 11:58:55 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:54.176 11:58:55 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:54.176 11:58:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:55 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:55 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:55 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:55 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:55 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:55 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:55 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:02:54.176 11:58:55 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:55 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:54.176 11:58:55 -- setup/acl.sh@20 -- # continue 00:02:54.176 11:58:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.176 11:58:55 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:54.176 11:58:55 -- setup/acl.sh@54 -- # run_test denied denied 00:02:54.176 11:58:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:54.176 11:58:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:54.176 11:58:55 -- common/autotest_common.sh@10 -- # set +x 00:02:54.176 ************************************ 00:02:54.176 START TEST denied 00:02:54.176 ************************************ 00:02:54.176 11:58:55 -- common/autotest_common.sh@1111 -- # denied 00:02:54.176 11:58:55 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:02:54.176 11:58:55 -- setup/acl.sh@38 -- # setup output config 00:02:54.176 11:58:55 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:02:54.176 11:58:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.176 11:58:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:57.557 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:02:57.557 11:58:58 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:02:57.557 11:58:58 -- setup/acl.sh@28 -- # local dev driver 00:02:57.557 11:58:58 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:57.557 11:58:58 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:02:57.557 11:58:58 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:02:57.557 11:58:58 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:57.557 11:58:58 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:57.557 11:58:58 -- setup/acl.sh@41 -- # setup reset 00:02:57.557 11:58:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:57.557 11:58:58 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.859 00:03:02.859 real 0m8.168s 00:03:02.859 user 0m2.546s 00:03:02.859 sys 0m4.845s 00:03:02.859 11:59:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:02.859 11:59:03 -- common/autotest_common.sh@10 -- # set +x 00:03:02.859 ************************************ 00:03:02.859 END TEST denied 00:03:02.859 ************************************ 00:03:02.859 11:59:03 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:02.859 11:59:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:02.859 11:59:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:02.859 11:59:03 -- common/autotest_common.sh@10 -- # set +x 00:03:02.859 ************************************ 00:03:02.859 START TEST allowed 00:03:02.859 ************************************ 00:03:02.859 11:59:03 -- common/autotest_common.sh@1111 -- # allowed 00:03:02.859 11:59:03 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:02.859 11:59:03 -- setup/acl.sh@45 -- # setup output config 00:03:02.859 11:59:03 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:02.859 11:59:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.859 11:59:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
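Annotation: the denied/allowed tests traced here steer scripts/setup.sh through the PCI_BLOCKED and PCI_ALLOWED environment variables and then grep its output for the expected message. A condensed sketch of what the two checks amount to is shown below; the BDF 0000:65:00.0, the variable values, and the grep patterns are taken from the trace, and the direct invocation (normally wrapped by `setup output config` and run with root privileges) is simplified for illustration.

  # Sketch: denied test - the blocked controller must be skipped.
  PCI_BLOCKED=' 0000:65:00.0' ./scripts/setup.sh config \
    | grep 'Skipping denied controller at 0000:65:00.0'

  # Sketch: allowed test - the allowed controller must be rebound to vfio-pci.
  PCI_ALLOWED='0000:65:00.0' ./scripts/setup.sh config \
    | grep -E '0000:65:00.0 .*: nvme -> .*'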
00:03:08.148 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:08.148 11:59:09 -- setup/acl.sh@47 -- # verify 00:03:08.148 11:59:09 -- setup/acl.sh@28 -- # local dev driver 00:03:08.148 11:59:09 -- setup/acl.sh@48 -- # setup reset 00:03:08.148 11:59:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.148 11:59:09 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.353 00:03:12.353 real 0m9.727s 00:03:12.353 user 0m2.933s 00:03:12.353 sys 0m5.047s 00:03:12.353 11:59:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:12.353 11:59:13 -- common/autotest_common.sh@10 -- # set +x 00:03:12.353 ************************************ 00:03:12.353 END TEST allowed 00:03:12.353 ************************************ 00:03:12.353 00:03:12.353 real 0m25.774s 00:03:12.353 user 0m8.418s 00:03:12.353 sys 0m14.958s 00:03:12.353 11:59:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:12.353 11:59:13 -- common/autotest_common.sh@10 -- # set +x 00:03:12.353 ************************************ 00:03:12.353 END TEST acl 00:03:12.353 ************************************ 00:03:12.353 11:59:13 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:12.353 11:59:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:12.353 11:59:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:12.353 11:59:13 -- common/autotest_common.sh@10 -- # set +x 00:03:12.353 ************************************ 00:03:12.353 START TEST hugepages 00:03:12.353 ************************************ 00:03:12.353 11:59:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:12.614 * Looking for test storage... 
00:03:12.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:12.615 11:59:13 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:12.615 11:59:13 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:12.615 11:59:13 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:12.615 11:59:13 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:12.615 11:59:13 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:12.615 11:59:13 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:12.615 11:59:13 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:12.615 11:59:13 -- setup/common.sh@18 -- # local node= 00:03:12.615 11:59:13 -- setup/common.sh@19 -- # local var val 00:03:12.615 11:59:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.615 11:59:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.615 11:59:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.615 11:59:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.615 11:59:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.615 11:59:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 107507084 kB' 'MemAvailable: 111032452 kB' 'Buffers: 4124 kB' 'Cached: 10149720 kB' 'SwapCached: 0 kB' 'Active: 7239136 kB' 'Inactive: 3515708 kB' 'Active(anon): 6548740 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604900 kB' 'Mapped: 175988 kB' 'Shmem: 5947740 kB' 'KReclaimable: 287192 kB' 'Slab: 1039640 kB' 'SReclaimable: 287192 kB' 'SUnreclaim: 752448 kB' 'KernelStack: 27056 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460884 kB' 'Committed_AS: 7919188 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234636 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.615 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.615 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 
00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 
00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # continue 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.616 11:59:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.616 11:59:13 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.616 11:59:13 -- setup/common.sh@33 -- # echo 2048 00:03:12.616 11:59:13 -- setup/common.sh@33 -- # return 0 00:03:12.616 11:59:13 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:12.616 11:59:13 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:12.616 11:59:13 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:12.616 11:59:13 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:12.616 11:59:13 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:12.616 11:59:13 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:12.616 11:59:13 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:12.616 11:59:13 -- setup/hugepages.sh@207 -- # get_nodes 00:03:12.616 11:59:13 -- setup/hugepages.sh@27 -- # local node 00:03:12.616 11:59:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.616 11:59:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:12.616 11:59:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.616 11:59:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:12.616 11:59:13 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:12.616 11:59:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.616 11:59:13 -- setup/hugepages.sh@208 -- # clear_hp 00:03:12.616 11:59:13 -- setup/hugepages.sh@37 -- # local node hp 00:03:12.616 11:59:13 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.616 11:59:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.616 11:59:13 -- setup/hugepages.sh@41 -- # echo 0 00:03:12.616 11:59:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.616 11:59:13 -- setup/hugepages.sh@41 -- # echo 0 00:03:12.616 11:59:13 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.616 11:59:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.616 11:59:13 -- setup/hugepages.sh@41 -- # echo 0 00:03:12.616 11:59:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.616 11:59:13 -- setup/hugepages.sh@41 -- # echo 0 00:03:12.616 11:59:13 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:12.616 11:59:13 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:12.616 11:59:13 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:12.616 11:59:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:12.616 11:59:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:12.616 11:59:13 -- common/autotest_common.sh@10 -- # set +x 00:03:12.616 ************************************ 00:03:12.616 START TEST default_setup 00:03:12.616 ************************************ 00:03:12.617 11:59:13 -- common/autotest_common.sh@1111 -- # default_setup 00:03:12.617 11:59:13 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:12.617 11:59:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:12.617 11:59:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:12.617 11:59:13 -- setup/hugepages.sh@51 -- # shift 00:03:12.617 11:59:13 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:12.617 11:59:13 -- setup/hugepages.sh@52 -- # local node_ids 00:03:12.617 11:59:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.617 11:59:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:12.617 11:59:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:12.617 11:59:13 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:12.617 11:59:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.617 11:59:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:12.617 11:59:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:12.617 11:59:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.617 11:59:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:12.617 11:59:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
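The trace above shows setup/hugepages.sh resolving the default hugepage size (Hugepagesize: 2048 kB from /proc/meminfo), clearing any pre-existing per-node reservations, and then sizing the default_setup test. A minimal sketch of the arithmetic implied by the traced values (2097152 kB requested, 2048 kB pages, node 0 only); illustrative only, not the actual test/setup/hugepages.sh code:

  # hypothetical stand-alone sketch of the count computed by get_test_nr_hugepages
  size_kb=2097152           # requested hugepage memory per the trace
  default_hugepages=2048    # Hugepagesize reported by /proc/meminfo, in kB
  nr_hugepages=$(( size_kb / default_hugepages ))   # 2097152 / 2048 = 1024
  echo "node0: ${nr_hugepages} pages of ${default_hugepages} kB"   # matches the per-node assignment in the trace below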
00:03:12.617 11:59:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:12.617 11:59:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:12.617 11:59:13 -- setup/hugepages.sh@73 -- # return 0 00:03:12.617 11:59:13 -- setup/hugepages.sh@137 -- # setup output 00:03:12.617 11:59:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.617 11:59:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:15.920 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:15.920 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:15.920 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:15.920 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:15.920 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:15.920 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:15.920 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:15.920 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:15.920 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:16.180 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:16.180 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:16.180 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:16.180 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:16.180 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:16.180 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:16.180 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:16.180 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:16.442 11:59:17 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:16.442 11:59:17 -- setup/hugepages.sh@89 -- # local node 00:03:16.442 11:59:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:16.442 11:59:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:16.442 11:59:17 -- setup/hugepages.sh@92 -- # local surp 00:03:16.442 11:59:17 -- setup/hugepages.sh@93 -- # local resv 00:03:16.442 11:59:17 -- setup/hugepages.sh@94 -- # local anon 00:03:16.442 11:59:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:16.442 11:59:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:16.442 11:59:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:16.442 11:59:17 -- setup/common.sh@18 -- # local node= 00:03:16.442 11:59:17 -- setup/common.sh@19 -- # local var val 00:03:16.442 11:59:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.442 11:59:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.442 11:59:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.442 11:59:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.442 11:59:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.442 11:59:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109643304 kB' 'MemAvailable: 113168224 kB' 'Buffers: 4124 kB' 'Cached: 10149856 kB' 'SwapCached: 0 kB' 'Active: 7257492 kB' 'Inactive: 3515708 kB' 'Active(anon): 6567096 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622608 kB' 'Mapped: 176472 kB' 'Shmem: 5947876 kB' 'KReclaimable: 286296 kB' 'Slab: 1037368 kB' 'SReclaimable: 286296 kB' 'SUnreclaim: 751072 kB' 'KernelStack: 27168 
kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7939736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234892 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 
11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.442 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.442 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.443 11:59:17 -- setup/common.sh@33 -- # echo 0 00:03:16.443 11:59:17 -- setup/common.sh@33 -- # return 0 00:03:16.443 11:59:17 -- setup/hugepages.sh@97 -- # anon=0 00:03:16.443 11:59:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:16.443 11:59:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.443 11:59:17 -- setup/common.sh@18 -- # local node= 00:03:16.443 11:59:17 -- setup/common.sh@19 -- # local var val 00:03:16.443 11:59:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.443 11:59:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.443 11:59:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.443 11:59:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.443 11:59:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.443 11:59:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109642788 kB' 'MemAvailable: 113167708 kB' 'Buffers: 4124 kB' 'Cached: 10149860 kB' 'SwapCached: 0 kB' 'Active: 7257392 kB' 'Inactive: 3515708 kB' 'Active(anon): 6566996 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622304 kB' 'Mapped: 176532 kB' 'Shmem: 5947880 kB' 'KReclaimable: 286296 kB' 'Slab: 1037504 kB' 'SReclaimable: 286296 kB' 'SUnreclaim: 751208 kB' 'KernelStack: 27152 kB' 'PageTables: 9288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7939748 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234908 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 
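The dump above is get_meminfo at work: a /proc/meminfo snapshot is captured once, then scanned field by field with IFS=': ' read -r var val _ until the requested key (HugePages_Surp here) matches, at which point its value is echoed (0 on this node). A hypothetical, self-contained re-implementation of that scan pattern; the name and the simplifications are assumptions, and the real helper in setup/common.sh additionally supports per-node meminfo and reads from a pre-captured array:

  # get_meminfo_sketch: illustrative only, not the setup/common.sh implementation
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  get_meminfo_sketch HugePages_Surp   # prints 0 on the node traced above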
00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 
11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.443 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.443 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': 
' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.444 11:59:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.444 11:59:17 -- setup/common.sh@33 -- # echo 0 00:03:16.444 11:59:17 -- setup/common.sh@33 -- # return 0 00:03:16.444 11:59:17 -- setup/hugepages.sh@99 -- # surp=0 00:03:16.444 11:59:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:16.444 11:59:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.444 11:59:17 -- setup/common.sh@18 -- # local node= 00:03:16.444 11:59:17 -- setup/common.sh@19 -- # local var val 00:03:16.444 11:59:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.444 11:59:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.444 11:59:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.444 11:59:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.444 11:59:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.444 11:59:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.444 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109643048 kB' 'MemAvailable: 113167968 kB' 'Buffers: 4124 kB' 'Cached: 10149872 kB' 'SwapCached: 0 kB' 'Active: 7256708 kB' 'Inactive: 3515708 kB' 'Active(anon): 6566312 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621728 kB' 'Mapped: 176444 kB' 'Shmem: 5947892 kB' 'KReclaimable: 286296 kB' 'Slab: 1037472 kB' 'SReclaimable: 286296 kB' 'SUnreclaim: 751176 kB' 'KernelStack: 27136 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7938120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234924 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 
11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # 
continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.445 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.445 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.446 11:59:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.446 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.446 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.446 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.446 11:59:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.446 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.446 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.446 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.709 11:59:17 -- setup/common.sh@33 -- # echo 0 00:03:16.709 11:59:17 -- setup/common.sh@33 -- # return 0 00:03:16.709 11:59:17 -- setup/hugepages.sh@100 -- # resv=0 00:03:16.709 11:59:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:16.709 nr_hugepages=1024 00:03:16.709 11:59:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.709 resv_hugepages=0 00:03:16.709 11:59:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.709 surplus_hugepages=0 00:03:16.709 11:59:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.709 anon_hugepages=0 00:03:16.709 11:59:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.709 11:59:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:16.709 11:59:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:16.709 11:59:17 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:16.709 11:59:17 -- setup/common.sh@18 -- # local node= 00:03:16.709 11:59:17 -- setup/common.sh@19 -- # local var val 00:03:16.709 11:59:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.709 11:59:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.709 11:59:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.709 11:59:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.709 11:59:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.709 11:59:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109643932 kB' 'MemAvailable: 113168852 kB' 'Buffers: 4124 kB' 'Cached: 10149884 kB' 'SwapCached: 0 kB' 'Active: 7256980 kB' 'Inactive: 3515708 kB' 'Active(anon): 6566584 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621748 kB' 'Mapped: 176304 kB' 'Shmem: 5947904 kB' 'KReclaimable: 286296 kB' 'Slab: 1037468 kB' 'SReclaimable: 286296 kB' 'SUnreclaim: 751172 kB' 'KernelStack: 27184 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7938132 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234876 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.709 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.709 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # 
continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 
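The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' / 'continue' pairs above are setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time under xtrace until it reaches the requested key. A minimal sketch of the same lookup, using illustrative names rather than the exact SPDK helper:

  get_meminfo_value() {
      # Return the value column for a single /proc/meminfo key, e.g. HugePages_Total.
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  # usage: get_meminfo_value HugePages_Total   (prints 1024 on this node per the trace above)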
00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.710 11:59:17 -- setup/common.sh@33 -- # echo 1024 00:03:16.710 11:59:17 -- setup/common.sh@33 -- # return 0 00:03:16.710 11:59:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.710 11:59:17 -- setup/hugepages.sh@112 -- # get_nodes 00:03:16.710 11:59:17 -- setup/hugepages.sh@27 -- # local node 00:03:16.710 11:59:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.710 11:59:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:16.710 11:59:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.710 11:59:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:16.710 11:59:17 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:16.710 11:59:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.710 11:59:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.710 11:59:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.710 11:59:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:16.710 11:59:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.710 11:59:17 -- setup/common.sh@18 -- # local node=0 00:03:16.710 11:59:17 -- setup/common.sh@19 -- # local var val 00:03:16.710 11:59:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.710 11:59:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.710 11:59:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.710 11:59:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.710 11:59:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.710 11:59:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59072572 kB' 'MemUsed: 6586436 kB' 'SwapCached: 0 
kB' 'Active: 2284232 kB' 'Inactive: 108924 kB' 'Active(anon): 1974712 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 108924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2284556 kB' 'Mapped: 105608 kB' 'AnonPages: 111832 kB' 'Shmem: 1866112 kB' 'KernelStack: 12104 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157208 kB' 'Slab: 523540 kB' 'SReclaimable: 157208 kB' 'SUnreclaim: 366332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 
11:59:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.710 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.710 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': 
' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # continue 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.711 11:59:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.711 11:59:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.711 11:59:17 -- setup/common.sh@33 -- # echo 0 00:03:16.711 11:59:17 -- setup/common.sh@33 -- # return 0 00:03:16.711 11:59:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.711 11:59:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.711 11:59:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.711 11:59:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.711 11:59:17 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:16.711 node0=1024 expecting 1024 00:03:16.711 11:59:17 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:16.711 00:03:16.711 real 0m3.952s 00:03:16.711 user 0m1.506s 00:03:16.711 sys 0m2.440s 00:03:16.711 11:59:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:16.711 11:59:17 -- common/autotest_common.sh@10 -- # set +x 00:03:16.711 ************************************ 00:03:16.711 END TEST default_setup 00:03:16.711 ************************************ 00:03:16.711 11:59:17 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:16.711 11:59:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:16.711 11:59:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:16.711 11:59:17 -- common/autotest_common.sh@10 -- # set +x 00:03:16.971 ************************************ 00:03:16.971 START TEST per_node_1G_alloc 00:03:16.971 ************************************ 00:03:16.971 11:59:17 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:03:16.971 11:59:17 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:16.971 11:59:17 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:16.971 11:59:17 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:16.971 11:59:17 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:16.971 11:59:17 -- setup/hugepages.sh@51 -- # shift 00:03:16.971 11:59:17 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:16.971 11:59:17 -- setup/hugepages.sh@52 -- # local node_ids 00:03:16.971 11:59:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:16.971 11:59:17 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:16.971 11:59:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:16.971 11:59:17 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:16.971 11:59:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.971 11:59:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:16.971 11:59:17 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.971 11:59:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.971 11:59:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.971 11:59:17 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:16.971 11:59:17 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:16.971 11:59:17 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:16.971 11:59:17 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:16.971 11:59:17 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:16.971 11:59:17 -- setup/hugepages.sh@73 -- # return 0 00:03:16.971 11:59:17 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:16.971 
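At this point hugepages.sh (get_test_nr_hugepages 1048576 0 1) converts the requested 1048576 kB into a hugepage count: with the default 2048 kB hugepage size that is 1048576 / 2048 = 512 pages, assigned to each of node 0 and node 1 (NRHUGE=512, HUGENODE=0,1), i.e. 1024 pages total, matching the nr_hugepages=1024 verified later in the test. A rough sketch of that arithmetic and of how per-node counts can be applied through sysfs (illustrative; not necessarily the exact path scripts/setup.sh takes):

  size_kb=1048576                             # requested allocation in kB
  hugepage_kb=2048                            # default Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepage_kb ))   # 512
  for node in 0 1; do
      echo "$nr_hugepages" | sudo tee \
          /sys/devices/system/node/node$node/hugepages/hugepages-${hugepage_kb}kB/nr_hugepages
  done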
11:59:17 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:16.971 11:59:17 -- setup/hugepages.sh@146 -- # setup output 00:03:16.971 11:59:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.972 11:59:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.271 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:20.271 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:20.271 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:20.536 11:59:21 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:20.536 11:59:21 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:20.536 11:59:21 -- setup/hugepages.sh@89 -- # local node 00:03:20.536 11:59:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:20.536 11:59:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:20.536 11:59:21 -- setup/hugepages.sh@92 -- # local surp 00:03:20.536 11:59:21 -- setup/hugepages.sh@93 -- # local resv 00:03:20.536 11:59:21 -- setup/hugepages.sh@94 -- # local anon 00:03:20.536 11:59:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:20.536 11:59:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:20.536 11:59:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:20.536 11:59:21 -- setup/common.sh@18 -- # local node= 00:03:20.536 11:59:21 -- setup/common.sh@19 -- # local var val 00:03:20.536 11:59:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.536 11:59:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.536 11:59:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.536 11:59:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.536 11:59:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.536 11:59:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109673900 kB' 'MemAvailable: 113198804 kB' 'Buffers: 4124 kB' 'Cached: 10149980 kB' 'SwapCached: 0 kB' 'Active: 7255000 kB' 'Inactive: 3515708 kB' 'Active(anon): 6564604 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619548 kB' 'Mapped: 175264 
kB' 'Shmem: 5948000 kB' 'KReclaimable: 286264 kB' 'Slab: 1037136 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750872 kB' 'KernelStack: 26976 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7925960 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.536 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.536 11:59:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.537 11:59:21 -- setup/common.sh@33 -- # echo 0 00:03:20.537 11:59:21 -- setup/common.sh@33 -- # return 0 00:03:20.537 11:59:21 -- setup/hugepages.sh@97 -- # anon=0 00:03:20.537 11:59:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.537 11:59:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.537 11:59:21 -- setup/common.sh@18 -- # local node= 00:03:20.537 11:59:21 -- setup/common.sh@19 -- # local var val 00:03:20.537 11:59:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.537 11:59:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.537 11:59:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.537 11:59:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.537 11:59:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.537 11:59:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109674144 kB' 'MemAvailable: 113199048 kB' 'Buffers: 4124 kB' 'Cached: 10149984 kB' 'SwapCached: 0 kB' 'Active: 7254744 kB' 'Inactive: 3515708 kB' 'Active(anon): 6564348 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619728 kB' 'Mapped: 175180 kB' 'Shmem: 5948004 kB' 'KReclaimable: 286264 kB' 'Slab: 1037124 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750860 kB' 'KernelStack: 26976 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7925972 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234684 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.537 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.537 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 
11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.538 11:59:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.538 11:59:21 -- setup/common.sh@33 -- # echo 0 00:03:20.538 11:59:21 -- setup/common.sh@33 -- # return 0 00:03:20.538 11:59:21 -- setup/hugepages.sh@99 -- # surp=0 00:03:20.538 11:59:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.538 11:59:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.538 11:59:21 -- setup/common.sh@18 -- # local node= 00:03:20.538 11:59:21 -- setup/common.sh@19 -- # local var val 00:03:20.538 11:59:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.538 11:59:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.538 11:59:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.538 11:59:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.538 11:59:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.538 11:59:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.538 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109673892 kB' 'MemAvailable: 113198796 kB' 'Buffers: 4124 kB' 'Cached: 10149984 kB' 'SwapCached: 0 kB' 'Active: 7254744 kB' 'Inactive: 3515708 kB' 'Active(anon): 6564348 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619728 kB' 'Mapped: 175180 kB' 'Shmem: 5948004 kB' 'KReclaimable: 286264 kB' 'Slab: 1037124 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750860 kB' 'KernelStack: 26976 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7925984 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234684 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.539 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.539 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 
00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.540 11:59:21 -- setup/common.sh@33 -- # echo 0 00:03:20.540 11:59:21 -- setup/common.sh@33 -- # return 0 00:03:20.540 11:59:21 -- setup/hugepages.sh@100 -- # resv=0 00:03:20.540 11:59:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:20.540 nr_hugepages=1024 00:03:20.540 11:59:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.540 resv_hugepages=0 00:03:20.540 11:59:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.540 surplus_hugepages=0 00:03:20.540 11:59:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.540 anon_hugepages=0 00:03:20.540 11:59:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.540 11:59:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
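The long trace above is setup/common.sh's get_meminfo helper walking every key in /proc/meminfo until it reaches the one requested: HugePages_Surp and HugePages_Rsvd both come back 0, HugePages_Total is 1024, and setup/hugepages.sh then asserts that HugePages_Total equals the requested count plus surplus plus reserved pages. A minimal bash sketch of that flow, modelled on the traced commands (the real helper in SPDK's test/setup/common.sh handles extra options, so treat the names and structure here as an approximation):

shopt -s extglob

# Hedged sketch of get_meminfo: fetch one key from /proc/meminfo or from a
# per-node meminfo file, mirroring the mapfile / IFS=': ' read loop in the trace.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each key with "Node <n> "
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    echo 0
}

nr_hugepages=1024                       # requested by the test (1024 x 2 MiB = 2 GiB)
surp=$(get_meminfo HugePages_Surp)      # 0 in the run above
resv=$(get_meminfo HugePages_Rsvd)      # 0 in the run above
total=$(get_meminfo HugePages_Total)    # 1024 in the run above
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2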
00:03:20.540 11:59:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.540 11:59:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.540 11:59:21 -- setup/common.sh@18 -- # local node= 00:03:20.540 11:59:21 -- setup/common.sh@19 -- # local var val 00:03:20.540 11:59:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.540 11:59:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.540 11:59:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.540 11:59:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.540 11:59:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.540 11:59:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109674584 kB' 'MemAvailable: 113199488 kB' 'Buffers: 4124 kB' 'Cached: 10150012 kB' 'SwapCached: 0 kB' 'Active: 7254732 kB' 'Inactive: 3515708 kB' 'Active(anon): 6564336 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619324 kB' 'Mapped: 175180 kB' 'Shmem: 5948032 kB' 'KReclaimable: 286264 kB' 'Slab: 1037124 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750860 kB' 'KernelStack: 26960 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7926000 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234684 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.540 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.540 11:59:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 
-- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 
00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- 
setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.541 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.541 11:59:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.541 11:59:21 -- setup/common.sh@33 -- # echo 1024 00:03:20.541 11:59:21 -- setup/common.sh@33 -- # return 0 00:03:20.541 11:59:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.541 11:59:21 -- setup/hugepages.sh@112 -- # get_nodes 00:03:20.541 11:59:21 -- setup/hugepages.sh@27 -- # local node 00:03:20.542 11:59:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.542 11:59:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:20.542 11:59:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.542 11:59:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:20.542 11:59:21 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.542 11:59:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.542 11:59:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.542 11:59:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.542 11:59:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:20.542 11:59:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.542 11:59:21 -- setup/common.sh@18 -- # local node=0 00:03:20.542 11:59:21 -- setup/common.sh@19 -- # local var val 00:03:20.542 11:59:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.542 11:59:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.542 11:59:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.542 11:59:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.542 11:59:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.542 11:59:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:20.542 11:59:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60148416 kB' 'MemUsed: 5510592 kB' 'SwapCached: 0 kB' 'Active: 2283936 kB' 'Inactive: 108924 kB' 'Active(anon): 1974416 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 108924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2284664 kB' 'Mapped: 104460 kB' 'AnonPages: 111396 kB' 'Shmem: 1866220 kB' 'KernelStack: 12104 kB' 'PageTables: 3416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157176 kB' 'Slab: 523232 kB' 'SReclaimable: 157176 kB' 'SUnreclaim: 366056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # 
continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.542 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.542 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 
11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@33 -- # echo 0 00:03:20.543 11:59:21 -- setup/common.sh@33 -- # return 0 00:03:20.543 11:59:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.543 11:59:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.543 11:59:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.543 11:59:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:20.543 11:59:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.543 11:59:21 -- setup/common.sh@18 -- # local node=1 00:03:20.543 11:59:21 -- setup/common.sh@19 -- # local var val 00:03:20.543 11:59:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.543 11:59:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.543 11:59:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:20.543 11:59:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:20.543 11:59:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.543 11:59:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 49526564 kB' 'MemUsed: 11153296 kB' 'SwapCached: 0 kB' 'Active: 4970860 kB' 'Inactive: 3406784 kB' 'Active(anon): 4589984 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3406784 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7869484 kB' 'Mapped: 70720 kB' 'AnonPages: 508332 kB' 'Shmem: 4081824 kB' 'KernelStack: 14872 kB' 'PageTables: 5040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129088 kB' 'Slab: 513892 kB' 'SReclaimable: 129088 kB' 'SUnreclaim: 384804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 
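The trace above reads HugePages_Surp from node0's meminfo and, below, does the same for node1; both report 0 surplus pages, and the test then confirms that each node holds 512 of the 1024 pages (node0=512 expecting 512, node1=512 expecting 512). A hedged sketch of that per-node check, reusing the get_meminfo sketch above (the real loop in setup/hugepages.sh builds the expectation from nodes_sys and folds reserved pages into it):

# Hedged sketch: confirm the 1024 hugepages are split evenly across the NUMA
# nodes by reading each node's meminfo, as the traced loop does.
expected=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(get_meminfo HugePages_Total "$node")   # 512 on node0 and node1 above
    surp=$(get_meminfo HugePages_Surp "$node")     # 0 on both nodes above
    echo "node$node=$((total - surp)) expecting $expected"
    (( total - surp == expected )) || echo "unexpected hugepage count on node$node" >&2
done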
00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.543 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.543 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.543 11:59:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.544 11:59:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.544 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.544 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.544 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.544 11:59:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.544 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.544 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.544 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.544 11:59:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.544 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.544 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.544 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.544 11:59:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.544 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.544 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.544 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.804 11:59:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.804 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.804 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.804 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.804 11:59:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.804 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.804 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.804 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.804 11:59:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.804 11:59:21 -- setup/common.sh@32 -- # continue 00:03:20.804 11:59:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.804 11:59:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.804 11:59:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.804 11:59:21 -- setup/common.sh@33 -- # echo 0 00:03:20.804 11:59:21 -- setup/common.sh@33 -- # return 0 00:03:20.804 11:59:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.804 11:59:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.804 11:59:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.804 11:59:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.804 11:59:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:20.804 node0=512 expecting 512 00:03:20.804 11:59:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.804 11:59:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.804 11:59:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.804 11:59:21 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:20.804 node1=512 expecting 512 00:03:20.804 11:59:21 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:20.804 00:03:20.804 real 0m3.828s 00:03:20.804 user 0m1.541s 00:03:20.804 sys 0m2.343s 00:03:20.804 11:59:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:20.804 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:03:20.804 ************************************ 00:03:20.804 END TEST per_node_1G_alloc 00:03:20.804 ************************************ 00:03:20.804 11:59:21 -- 
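
Note: the per_node_1G_alloc case above passes by confirming the "node0=512 expecting 512" / "node1=512 expecting 512" split before printing its END TEST banner. The standalone sketch below shows one way such a per-node check can be expressed by reading each node's meminfo under /sys/devices/system/node; the helper name check_node_hugepages and the hard-coded expectation of 512 pages are illustrative assumptions, not the setup/hugepages.sh code being traced here.

    #!/usr/bin/env bash
    # Sketch: assert that every NUMA node reports the expected number of hugepages,
    # mirroring the "nodeN=512 expecting 512" checks traced above.
    check_node_hugepages() {
        local expected=$1 node total
        for node in /sys/devices/system/node/node[0-9]*; do
            # Per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
            total=$(awk '/HugePages_Total/ {print $4}' "$node/meminfo")
            echo "${node##*/}=${total} expecting ${expected}"
            [[ $total -eq $expected ]] || return 1
        done
    }
    check_node_hugepages 512
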
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:20.804 11:59:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.804 11:59:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.804 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:03:20.804 ************************************ 00:03:20.804 START TEST even_2G_alloc 00:03:20.804 ************************************ 00:03:20.804 11:59:21 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:20.804 11:59:21 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:20.804 11:59:21 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:20.804 11:59:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:20.804 11:59:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.804 11:59:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:20.805 11:59:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:20.805 11:59:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:20.805 11:59:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.805 11:59:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:20.805 11:59:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.805 11:59:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.805 11:59:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.805 11:59:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:20.805 11:59:21 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:20.805 11:59:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.805 11:59:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:20.805 11:59:21 -- setup/hugepages.sh@83 -- # : 512 00:03:20.805 11:59:21 -- setup/hugepages.sh@84 -- # : 1 00:03:20.805 11:59:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.805 11:59:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:20.805 11:59:21 -- setup/hugepages.sh@83 -- # : 0 00:03:20.805 11:59:21 -- setup/hugepages.sh@84 -- # : 0 00:03:20.805 11:59:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.805 11:59:21 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:20.805 11:59:21 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:20.805 11:59:21 -- setup/hugepages.sh@153 -- # setup output 00:03:20.805 11:59:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.805 11:59:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.099 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:24.099 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:24.099 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:24.099 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:24.099 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:24.099 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:24.099 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:24.099 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:24.099 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:24.099 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:24.099 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:24.099 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:24.099 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:24.099 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:24.099 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:03:24.099 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:24.099 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:24.359 11:59:25 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:24.359 11:59:25 -- setup/hugepages.sh@89 -- # local node 00:03:24.359 11:59:25 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.359 11:59:25 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.359 11:59:25 -- setup/hugepages.sh@92 -- # local surp 00:03:24.359 11:59:25 -- setup/hugepages.sh@93 -- # local resv 00:03:24.359 11:59:25 -- setup/hugepages.sh@94 -- # local anon 00:03:24.359 11:59:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.359 11:59:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.359 11:59:25 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.359 11:59:25 -- setup/common.sh@18 -- # local node= 00:03:24.359 11:59:25 -- setup/common.sh@19 -- # local var val 00:03:24.359 11:59:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.359 11:59:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.359 11:59:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.359 11:59:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.359 11:59:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.359 11:59:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.359 11:59:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109672516 kB' 'MemAvailable: 113197420 kB' 'Buffers: 4124 kB' 'Cached: 10150128 kB' 'SwapCached: 0 kB' 'Active: 7256120 kB' 'Inactive: 3515708 kB' 'Active(anon): 6565724 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620872 kB' 'Mapped: 175200 kB' 'Shmem: 5948148 kB' 'KReclaimable: 286264 kB' 'Slab: 1036652 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750388 kB' 'KernelStack: 26992 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7927044 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234764 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.359 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.359 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 
11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.625 11:59:25 -- 
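
Note: the even_2G_alloc case that begins above requests size=2097152 (kB) of hugepages; with the 2048 kB default hugepage size shown in the meminfo dump that works out to nr_hugepages=1024, and with HUGE_EVEN_ALLOC=yes over _no_nodes=2 the script books 512 pages onto each node, which is what NRHUGE=1024 plus the per-node 512 entries in the trace reflect. A minimal sketch of that arithmetic follows; pages_per_node is a made-up helper name used only for illustration.

    #!/usr/bin/env bash
    # Sketch: turn a size request in kB into a hugepage count and an even per-node
    # split, as in the even_2G_alloc setup traced above (2097152 kB -> 1024 -> 512/node).
    pages_per_node() {
        local size_kb=$1 nodes=$2
        local hugepagesize_kb
        hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this system
        local nr_hugepages=$(( size_kb / hugepagesize_kb ))                  # 2097152 / 2048 = 1024
        echo "nr_hugepages=${nr_hugepages} per_node=$(( nr_hugepages / nodes ))"
    }
    pages_per_node 2097152 2    # -> nr_hugepages=1024 per_node=512
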
setup/common.sh@33 -- # echo 0 00:03:24.625 11:59:25 -- setup/common.sh@33 -- # return 0 00:03:24.625 11:59:25 -- setup/hugepages.sh@97 -- # anon=0 00:03:24.625 11:59:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.625 11:59:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.625 11:59:25 -- setup/common.sh@18 -- # local node= 00:03:24.625 11:59:25 -- setup/common.sh@19 -- # local var val 00:03:24.625 11:59:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.625 11:59:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.625 11:59:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.625 11:59:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.625 11:59:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.625 11:59:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.625 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.625 11:59:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109672776 kB' 'MemAvailable: 113197680 kB' 'Buffers: 4124 kB' 'Cached: 10150128 kB' 'SwapCached: 0 kB' 'Active: 7255888 kB' 'Inactive: 3515708 kB' 'Active(anon): 6565492 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620592 kB' 'Mapped: 175160 kB' 'Shmem: 5948148 kB' 'KReclaimable: 286264 kB' 'Slab: 1036652 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750388 kB' 'KernelStack: 26992 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7927056 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234764 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:24.625 11:59:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 
11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 
11:59:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': 
' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.626 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.626 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.627 11:59:25 -- setup/common.sh@33 -- # echo 0 00:03:24.627 11:59:25 -- setup/common.sh@33 -- # return 0 00:03:24.627 11:59:25 -- setup/hugepages.sh@99 -- # surp=0 00:03:24.627 11:59:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.627 11:59:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.627 11:59:25 -- setup/common.sh@18 -- # local node= 00:03:24.627 11:59:25 -- setup/common.sh@19 -- # local var val 00:03:24.627 11:59:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.627 11:59:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.627 11:59:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.627 11:59:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.627 11:59:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.627 11:59:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- 
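
Note: the long [[ field == pattern ]] / continue runs in this trace are the common.sh get_meminfo helper walking /proc/meminfo (or a node's meminfo file when a node argument is given) with IFS=': ' until it reaches the requested key and echoes that value; that is how anon=0 and surp=0 were obtained above, and resv is fetched the same way next. A condensed standalone version of that lookup could look like the sketch below; it simplifies what the xtrace shows and is not the traced script itself.

    #!/usr/bin/env bash
    # Sketch: fetch one field from /proc/meminfo or a per-node meminfo file,
    # condensing the IFS=': ' / read -r var val loop visible in the trace.
    shopt -s extglob    # needed for the "Node N " prefix strip, as in the traced script
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] && \
            mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }    # per-node files prefix every line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    get_meminfo HugePages_Surp      # prints 0 in the run traced above
    get_meminfo AnonHugePages 0     # same lookup against node0's meminfo
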
setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109673396 kB' 'MemAvailable: 113198300 kB' 'Buffers: 4124 kB' 'Cached: 10150140 kB' 'SwapCached: 0 kB' 'Active: 7256132 kB' 'Inactive: 3515708 kB' 'Active(anon): 6565736 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620904 kB' 'Mapped: 175160 kB' 'Shmem: 5948160 kB' 'KReclaimable: 286264 kB' 'Slab: 1036716 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750452 kB' 'KernelStack: 26992 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7927068 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234764 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.627 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.627 11:59:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.627 11:59:25 -- 
setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.628 11:59:25 -- setup/common.sh@33 -- # echo 0 00:03:24.628 11:59:25 -- setup/common.sh@33 -- # return 0 00:03:24.628 11:59:25 -- setup/hugepages.sh@100 -- # resv=0 00:03:24.628 11:59:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.628 nr_hugepages=1024 00:03:24.628 11:59:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.628 resv_hugepages=0 00:03:24.628 11:59:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.628 surplus_hugepages=0 00:03:24.628 11:59:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.628 anon_hugepages=0 00:03:24.628 11:59:25 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.628 11:59:25 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.628 11:59:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.628 11:59:25 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.628 11:59:25 -- setup/common.sh@18 -- # local node= 00:03:24.628 11:59:25 -- setup/common.sh@19 -- # local var val 00:03:24.628 11:59:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.628 11:59:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.628 11:59:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.628 11:59:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.628 11:59:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.628 11:59:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.628 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.628 11:59:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109673396 kB' 'MemAvailable: 113198300 kB' 'Buffers: 4124 kB' 'Cached: 10150156 kB' 'SwapCached: 0 kB' 'Active: 7255892 kB' 'Inactive: 3515708 kB' 'Active(anon): 6565496 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620568 kB' 'Mapped: 175160 kB' 'Shmem: 5948176 kB' 'KReclaimable: 286264 kB' 'Slab: 1036716 
kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750452 kB' 'KernelStack: 26992 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7927084 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234780 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.628 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 
11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.629 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.629 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 
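The long run of "[[ <field> == HugePages_Total ]] ... continue" entries above is the xtrace of setup/common.sh's get_meminfo walking /proc/meminfo one field at a time until the requested key matches. A minimal sketch of that scan pattern, in plain bash with an illustrative variable name (get is a placeholder here, not the script's interface):

    get=HugePages_Total
    # Skip every non-matching key; each skip shows up as one "continue" entry
    # in the trace above. Print the value once the requested key is reached.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; break; }
    done < /proc/meminfo

Each "continue" line in the trace therefore corresponds to exactly one meminfo field being skipped on the way to HugePages_Total.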
00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.630 11:59:25 -- setup/common.sh@33 -- # echo 1024 00:03:24.630 11:59:25 -- setup/common.sh@33 -- # return 0 00:03:24.630 11:59:25 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.630 11:59:25 -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.630 11:59:25 -- setup/hugepages.sh@27 -- # local node 00:03:24.630 11:59:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.630 11:59:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.630 11:59:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.630 11:59:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.630 11:59:25 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.630 11:59:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.630 11:59:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.630 11:59:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.630 11:59:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.630 11:59:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.630 11:59:25 -- setup/common.sh@18 -- # local node=0 00:03:24.630 11:59:25 -- setup/common.sh@19 -- # local var val 00:03:24.630 11:59:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.630 11:59:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.630 11:59:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.630 11:59:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.630 11:59:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.630 11:59:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60165544 kB' 'MemUsed: 5493464 kB' 'SwapCached: 0 kB' 'Active: 2284808 kB' 'Inactive: 108924 kB' 'Active(anon): 1975288 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 108924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2284792 kB' 'Mapped: 104464 kB' 'AnonPages: 112148 kB' 'Shmem: 1866348 kB' 'KernelStack: 12120 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157176 kB' 'Slab: 522992 kB' 'SReclaimable: 157176 kB' 'SUnreclaim: 365816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 
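When get_meminfo is called with a node number (local node=0 above), the same key scan runs against /sys/devices/system/node/node0/meminfo; the traced mapfile plus "${mem[@]#Node +([0-9]) }" step strips the leading "Node 0 " prefix first. A simpler, roughly equivalent sketch under that assumption, with node and get again as illustrative placeholders:

    node=0 get=HugePages_Surp
    # Node meminfo lines look like "Node 0 HugePages_Surp: 0", so consume the
    # two prefix fields before comparing the key and printing its value.
    while IFS=': ' read -r _ _ var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; break; }
    done < /sys/devices/system/node/node$node/meminfo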
00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.630 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.630 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@33 -- # echo 0 00:03:24.631 11:59:25 -- setup/common.sh@33 -- # return 0 00:03:24.631 11:59:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.631 11:59:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.631 11:59:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.631 11:59:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.631 11:59:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.631 11:59:25 -- setup/common.sh@18 -- # local node=1 00:03:24.631 11:59:25 -- setup/common.sh@19 -- # local var val 00:03:24.631 11:59:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.631 11:59:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.631 11:59:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.631 11:59:25 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.631 11:59:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.631 11:59:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 49508020 kB' 'MemUsed: 11171840 kB' 'SwapCached: 0 kB' 'Active: 4971020 kB' 'Inactive: 3406784 kB' 'Active(anon): 4590144 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3406784 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7869504 kB' 'Mapped: 70696 kB' 'AnonPages: 508416 kB' 'Shmem: 4081844 kB' 'KernelStack: 14872 kB' 'PageTables: 5044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129088 kB' 'Slab: 513724 kB' 'SReclaimable: 129088 kB' 'SUnreclaim: 384636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- 
setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.631 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.631 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # continue 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.632 11:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.632 11:59:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.632 11:59:25 -- setup/common.sh@33 -- # echo 0 00:03:24.632 11:59:25 -- setup/common.sh@33 -- # return 0 00:03:24.632 11:59:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.632 11:59:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.632 11:59:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.632 11:59:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.632 11:59:25 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:24.632 node0=512 expecting 512 00:03:24.632 11:59:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.632 11:59:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.632 11:59:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.632 11:59:25 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:24.632 node1=512 expecting 512 00:03:24.632 11:59:25 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:24.632 00:03:24.632 real 0m3.787s 00:03:24.632 user 0m1.419s 00:03:24.632 sys 0m2.390s 00:03:24.632 11:59:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:24.632 11:59:25 -- common/autotest_common.sh@10 -- # set +x 00:03:24.632 ************************************ 00:03:24.632 END TEST even_2G_alloc 00:03:24.632 ************************************ 00:03:24.632 11:59:25 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:24.632 11:59:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:24.632 11:59:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:24.632 11:59:25 -- common/autotest_common.sh@10 -- # set +x 00:03:24.892 ************************************ 00:03:24.892 START TEST odd_alloc 00:03:24.892 ************************************ 00:03:24.892 11:59:25 -- common/autotest_common.sh@1111 -- # odd_alloc 00:03:24.892 11:59:25 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:24.892 11:59:25 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:24.892 11:59:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.892 11:59:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.892 11:59:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:24.892 11:59:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.892 11:59:25 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.892 11:59:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.892 11:59:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:24.892 11:59:25 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.892 11:59:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.892 11:59:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.892 11:59:25 
-- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.892 11:59:25 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:24.892 11:59:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.892 11:59:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:24.892 11:59:25 -- setup/hugepages.sh@83 -- # : 513 00:03:24.892 11:59:25 -- setup/hugepages.sh@84 -- # : 1 00:03:24.892 11:59:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.892 11:59:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:24.892 11:59:25 -- setup/hugepages.sh@83 -- # : 0 00:03:24.892 11:59:25 -- setup/hugepages.sh@84 -- # : 0 00:03:24.892 11:59:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.892 11:59:25 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:24.892 11:59:25 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:24.892 11:59:25 -- setup/hugepages.sh@160 -- # setup output 00:03:24.892 11:59:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.892 11:59:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.196 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:28.196 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:28.196 11:59:29 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:28.196 11:59:29 -- setup/hugepages.sh@89 -- # local node 00:03:28.196 11:59:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.196 11:59:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.196 11:59:29 -- setup/hugepages.sh@92 -- # local surp 00:03:28.196 11:59:29 -- setup/hugepages.sh@93 -- # local resv 00:03:28.196 11:59:29 -- setup/hugepages.sh@94 -- # local anon 00:03:28.196 11:59:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.196 11:59:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.196 11:59:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.196 11:59:29 -- setup/common.sh@18 -- # local node= 00:03:28.196 11:59:29 -- setup/common.sh@19 -- # local var val 00:03:28.196 11:59:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.196 11:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.196 11:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.196 11:59:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.196 11:59:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.196 
11:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.196 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.196 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109651752 kB' 'MemAvailable: 113176656 kB' 'Buffers: 4124 kB' 'Cached: 10150272 kB' 'SwapCached: 0 kB' 'Active: 7257980 kB' 'Inactive: 3515708 kB' 'Active(anon): 6567584 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622536 kB' 'Mapped: 175296 kB' 'Shmem: 5948292 kB' 'KReclaimable: 286264 kB' 'Slab: 1037944 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 751680 kB' 'KernelStack: 26976 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 7930756 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234796 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 
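The odd_alloc setup traced earlier (hugepages.sh@159 through @84) requests 2098176 kB of 2048 kB pages, i.e. 1025 pages, and spreads the odd total across the two NUMA nodes as 513 on node0 and 512 on node1. A small stand-alone recomputation of that split, purely for illustration rather than the script's own loop:

    size_kb=2098176 hugepagesize_kb=2048 no_nodes=2
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1025
    base=$(( nr_hugepages / no_nodes ))             # 512 pages per node
    rem=$(( nr_hugepages % no_nodes ))              # 1 page left over
    for (( node = 0; node < no_nodes; node++ )); do
        # The leftover page lands on node0, matching the traced 513/512 split.
        echo "node$node=$(( base + (node < rem ? 1 : 0) ))"
    done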
00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 
00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.197 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.197 11:59:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.197 11:59:29 -- setup/common.sh@33 -- # echo 0 00:03:28.198 11:59:29 -- setup/common.sh@33 -- # return 0 00:03:28.198 11:59:29 -- setup/hugepages.sh@97 -- # anon=0 00:03:28.198 11:59:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.198 11:59:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.198 11:59:29 -- setup/common.sh@18 -- # local node= 00:03:28.198 11:59:29 -- setup/common.sh@19 -- # local var val 00:03:28.198 11:59:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.198 11:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.198 11:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.198 11:59:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.198 11:59:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.198 11:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109651992 kB' 'MemAvailable: 113176896 kB' 'Buffers: 4124 kB' 'Cached: 10150276 kB' 'SwapCached: 0 kB' 'Active: 7257360 kB' 'Inactive: 3515708 kB' 'Active(anon): 6566964 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621872 kB' 'Mapped: 175256 kB' 'Shmem: 5948296 kB' 'KReclaimable: 286264 kB' 'Slab: 1037960 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 751696 kB' 'KernelStack: 27024 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 7930760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234844 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 
11:59:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.198 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.198 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.199 11:59:29 -- setup/common.sh@33 -- # echo 0 00:03:28.199 11:59:29 -- setup/common.sh@33 -- # return 0 00:03:28.199 11:59:29 -- setup/hugepages.sh@99 -- # surp=0 00:03:28.199 11:59:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.199 11:59:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.199 11:59:29 -- setup/common.sh@18 -- # local node= 00:03:28.199 11:59:29 -- setup/common.sh@19 -- # local var val 00:03:28.199 11:59:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.199 11:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.199 11:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.199 11:59:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.199 11:59:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.199 11:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109653948 kB' 'MemAvailable: 113178852 kB' 'Buffers: 4124 kB' 'Cached: 10150284 kB' 'SwapCached: 0 kB' 'Active: 7259504 kB' 'Inactive: 3515708 kB' 'Active(anon): 6569108 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624068 kB' 'Mapped: 175760 kB' 'Shmem: 5948304 kB' 'KReclaimable: 286264 kB' 'Slab: 1037960 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 751696 kB' 'KernelStack: 26864 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 7931812 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234764 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:28.199 11:59:29 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.199 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.199 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- 
setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 
11:59:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.200 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.200 11:59:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.200 11:59:29 -- setup/common.sh@33 -- # echo 0 00:03:28.200 
11:59:29 -- setup/common.sh@33 -- # return 0 00:03:28.200 11:59:29 -- setup/hugepages.sh@100 -- # resv=0 00:03:28.200 11:59:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:28.200 nr_hugepages=1025 00:03:28.200 11:59:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.200 resv_hugepages=0 00:03:28.200 11:59:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.200 surplus_hugepages=0 00:03:28.200 11:59:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.200 anon_hugepages=0 00:03:28.200 11:59:29 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:28.200 11:59:29 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:28.200 11:59:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.200 11:59:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.200 11:59:29 -- setup/common.sh@18 -- # local node= 00:03:28.200 11:59:29 -- setup/common.sh@19 -- # local var val 00:03:28.200 11:59:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.200 11:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.200 11:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.200 11:59:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.200 11:59:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.201 11:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109647472 kB' 'MemAvailable: 113172376 kB' 'Buffers: 4124 kB' 'Cached: 10150300 kB' 'SwapCached: 0 kB' 'Active: 7262940 kB' 'Inactive: 3515708 kB' 'Active(anon): 6572544 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627468 kB' 'Mapped: 176020 kB' 'Shmem: 5948320 kB' 'KReclaimable: 286264 kB' 'Slab: 1037952 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 751688 kB' 'KernelStack: 26912 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 7934900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234688 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
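The values gathered so far (anon=0, surp=0, resv=0, nr_hugepages=1025) feed the hugepages.sh@107/@109 checks that just ran in the trace: a plain arithmetic consistency test before the per-node breakdown. A sketch of that check, assuming the variable names shown above:

  nr_hugepages=1025   # HugePages_Total from the snapshot
  surp=0              # HugePages_Surp
  resv=0              # HugePages_Rsvd
  (( 1025 == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"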
00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.201 11:59:29 -- setup/common.sh@32 -- # continue 
00:03:28.201 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.201 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.202 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.202 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.202 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.202 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.202 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.202 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.202 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.202 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.202 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.202 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.202 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.202 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.202 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.202 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.202 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.464 
11:59:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.464 11:59:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.464 11:59:29 -- setup/common.sh@33 -- # echo 1025 00:03:28.464 11:59:29 -- setup/common.sh@33 -- # return 0 00:03:28.464 11:59:29 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:28.464 11:59:29 -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.464 11:59:29 -- setup/hugepages.sh@27 -- # local node 00:03:28.464 11:59:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.464 11:59:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:28.464 11:59:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.464 11:59:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:28.464 11:59:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.464 11:59:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.464 11:59:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.464 11:59:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.464 11:59:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.464 11:59:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.464 11:59:29 
-- setup/common.sh@18 -- # local node=0 00:03:28.464 11:59:29 -- setup/common.sh@19 -- # local var val 00:03:28.464 11:59:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.464 11:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.464 11:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.464 11:59:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.464 11:59:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.464 11:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.464 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60132280 kB' 'MemUsed: 5526728 kB' 'SwapCached: 0 kB' 'Active: 2283444 kB' 'Inactive: 108924 kB' 'Active(anon): 1973924 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 108924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2284888 kB' 'Mapped: 104580 kB' 'AnonPages: 110676 kB' 'Shmem: 1866444 kB' 'KernelStack: 12088 kB' 'PageTables: 3404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157176 kB' 'Slab: 523972 kB' 'SReclaimable: 157176 kB' 'SUnreclaim: 366796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- 
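Here the lookups move from the global /proc/meminfo to the per-NUMA-node files: get_nodes found two nodes, the test assigned 512 pages to node0 and 513 to node1, and the node0 snapshot above reports HugePages_Total: 512. A sketch of reading the per-node totals directly, assuming the /sys layout shown in this trace:

  for node in /sys/devices/system/node/node[0-9]*; do
      id=${node##*node}
      # node meminfo lines look like: "Node 0 HugePages_Total:   512"
      total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
      echo "node$id: $total hugepages"
  done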
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 
00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.465 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.465 11:59:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.465 11:59:29 -- setup/common.sh@33 -- # echo 0 00:03:28.465 11:59:29 -- setup/common.sh@33 -- # return 0 00:03:28.465 11:59:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.465 11:59:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.465 11:59:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.465 11:59:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:28.465 11:59:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.465 11:59:29 -- setup/common.sh@18 -- # local node=1 00:03:28.465 11:59:29 -- setup/common.sh@19 -- # local var val 00:03:28.465 11:59:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.465 11:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.466 11:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:28.466 11:59:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:28.466 11:59:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.466 11:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 49514964 kB' 'MemUsed: 11164896 kB' 'SwapCached: 0 kB' 'Active: 4973500 kB' 'Inactive: 3406784 kB' 'Active(anon): 4592624 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3406784 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7869552 kB' 'Mapped: 70784 kB' 'AnonPages: 510776 kB' 'Shmem: 4081892 kB' 'KernelStack: 14744 kB' 'PageTables: 4748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129088 kB' 'Slab: 513980 kB' 'SReclaimable: 129088 kB' 'SUnreclaim: 384892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- 
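The node1 snapshot just printed reports HugePages_Total: 513, so the two per-node allocations add up to the global figure checked earlier; the remaining loop only has to confirm that HugePages_Surp is 0 on node1, as it already did for node0. The arithmetic, spelled out:

  (( 512 + 513 == 1025 )) && echo "per-node hugepage split matches the global HugePages_Total"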
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 
00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- 
setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # continue 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.466 11:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.466 11:59:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.467 11:59:29 -- setup/common.sh@33 -- # echo 0 00:03:28.467 11:59:29 -- setup/common.sh@33 -- # return 0 00:03:28.467 11:59:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.467 11:59:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.467 11:59:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.467 11:59:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:28.467 node0=512 expecting 513 00:03:28.467 11:59:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.467 11:59:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
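Aside: the long runs of "continue" above are the get_meminfo helper scanning every meminfo field until it reaches the requested key (HugePages_Surp here, queried once per NUMA node). A minimal standalone sketch of that pattern, with get_meminfo_sketch as a hypothetical name and the input handling simplified, looks like this:

#!/usr/bin/env bash
shopt -s extglob

# get_meminfo_sketch KEY [NODE]
# Assumed simplification of the pattern traced above: pick /proc/meminfo, or the
# per-NUMA-node meminfo file when NODE is given, drop the "Node N " prefix that
# the per-node files carry, then scan "Key: value" pairs until the requested key
# matches and print its value (0 if the key never appears).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }        # no-op for /proc/meminfo lines
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

# Example: per-node surplus hugepages, as queried repeatedly in the trace.
get_meminfo_sketch HugePages_Surp 1

The per-node files prefix every line with "Node N", which is why the trace strips that prefix before splitting each line on ': '.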
00:03:28.467 11:59:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.467 11:59:29 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:28.467 node1=513 expecting 512 00:03:28.467 11:59:29 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:28.467 00:03:28.467 real 0m3.553s 00:03:28.467 user 0m1.230s 00:03:28.467 sys 0m2.268s 00:03:28.467 11:59:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:28.467 11:59:29 -- common/autotest_common.sh@10 -- # set +x 00:03:28.467 ************************************ 00:03:28.467 END TEST odd_alloc 00:03:28.467 ************************************ 00:03:28.467 11:59:29 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:28.467 11:59:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.467 11:59:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.467 11:59:29 -- common/autotest_common.sh@10 -- # set +x 00:03:28.467 ************************************ 00:03:28.467 START TEST custom_alloc 00:03:28.467 ************************************ 00:03:28.467 11:59:29 -- common/autotest_common.sh@1111 -- # custom_alloc 00:03:28.467 11:59:29 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:28.467 11:59:29 -- setup/hugepages.sh@169 -- # local node 00:03:28.467 11:59:29 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:28.467 11:59:29 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:28.467 11:59:29 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:28.467 11:59:29 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:28.467 11:59:29 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:28.467 11:59:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:28.467 11:59:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:28.467 11:59:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:28.467 11:59:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.467 11:59:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:28.467 11:59:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.467 11:59:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.467 11:59:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.467 11:59:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:28.467 11:59:29 -- setup/hugepages.sh@83 -- # : 256 00:03:28.467 11:59:29 -- setup/hugepages.sh@84 -- # : 1 00:03:28.467 11:59:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:28.467 11:59:29 -- setup/hugepages.sh@83 -- # : 0 00:03:28.467 11:59:29 -- setup/hugepages.sh@84 -- # : 0 00:03:28.467 11:59:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:28.467 11:59:29 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:28.467 11:59:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:28.467 11:59:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:28.467 11:59:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:28.467 11:59:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:28.467 11:59:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.467 11:59:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.467 11:59:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.467 11:59:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.467 11:59:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.467 11:59:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:28.467 11:59:29 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:28.467 11:59:29 -- setup/hugepages.sh@78 -- # return 0 00:03:28.467 11:59:29 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:28.467 11:59:29 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:28.467 11:59:29 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:28.467 11:59:29 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:28.467 11:59:29 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:28.467 11:59:29 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:28.467 11:59:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:28.467 11:59:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.467 11:59:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.467 11:59:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.467 11:59:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.467 11:59:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.467 11:59:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:28.467 11:59:29 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:28.467 11:59:29 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:28.467 11:59:29 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:28.467 11:59:29 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:28.467 11:59:29 -- setup/hugepages.sh@78 -- # return 0 00:03:28.467 11:59:29 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:28.467 11:59:29 -- setup/hugepages.sh@187 -- # setup output 00:03:28.467 11:59:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.467 11:59:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:31.768 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:31.768 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:31.768 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:31.768 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:31.768 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:31.768 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:31.768 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:31.768 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:31.768 0000:00:01.6 
(8086 0b00): Already using the vfio-pci driver 00:03:31.768 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:31.768 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:31.768 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:31.768 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:31.768 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:32.030 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:32.030 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:32.030 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:32.316 11:59:33 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:32.316 11:59:33 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:32.316 11:59:33 -- setup/hugepages.sh@89 -- # local node 00:03:32.316 11:59:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.316 11:59:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.316 11:59:33 -- setup/hugepages.sh@92 -- # local surp 00:03:32.316 11:59:33 -- setup/hugepages.sh@93 -- # local resv 00:03:32.316 11:59:33 -- setup/hugepages.sh@94 -- # local anon 00:03:32.316 11:59:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.316 11:59:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.316 11:59:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.316 11:59:33 -- setup/common.sh@18 -- # local node= 00:03:32.316 11:59:33 -- setup/common.sh@19 -- # local var val 00:03:32.316 11:59:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.316 11:59:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.316 11:59:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.316 11:59:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.316 11:59:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.316 11:59:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108626636 kB' 'MemAvailable: 112151540 kB' 'Buffers: 4124 kB' 'Cached: 10150432 kB' 'SwapCached: 0 kB' 'Active: 7259516 kB' 'Inactive: 3515708 kB' 'Active(anon): 6569120 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623412 kB' 'Mapped: 175392 kB' 'Shmem: 5948452 kB' 'KReclaimable: 286264 kB' 'Slab: 1036988 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750724 kB' 'KernelStack: 27024 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 7930172 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234892 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 
-- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- 
# [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 
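For context on the custom_alloc setup traced a little earlier, before verify_nr_hugepages started rescanning meminfo: the requested sizes (1048576 kB and 2097152 kB) are divided by the 2048 kB hugepage size to get nodes_hp[0]=512 and nodes_hp[1]=1024, which are then comma-joined into the HUGENODE string handed to setup.sh. A rough standalone sketch, with size_to_hugepages as a hypothetical helper name, is:

#!/usr/bin/env bash
# Condensed sketch of the custom_alloc sizing logic seen in the trace; the
# helper name and layout are illustrative, only the arithmetic and the final
# HUGENODE string come from the log.
default_hugepages_kb=2048                  # Hugepagesize: 2048 kB in this log

size_to_hugepages() {                      # hypothetical helper, argument in kB
    echo $(( $1 / default_hugepages_kb ))
}

declare -a nodes_hp HUGENODE
nodes_hp[0]=$(size_to_hugepages 1048576)   # 1 GiB -> 512 pages
nodes_hp[1]=$(size_to_hugepages 2097152)   # 2 GiB -> 1024 pages

for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
done
(IFS=,; echo "HUGENODE='${HUGENODE[*]}'")  # nodes_hp[0]=512,nodes_hp[1]=1024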
00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 
-- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.316 11:59:33 -- setup/common.sh@33 -- # echo 0 00:03:32.316 11:59:33 -- setup/common.sh@33 -- # return 0 00:03:32.316 11:59:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:32.316 11:59:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.316 11:59:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.316 11:59:33 -- setup/common.sh@18 -- # local node= 00:03:32.316 11:59:33 -- setup/common.sh@19 -- # local var val 00:03:32.316 11:59:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.316 11:59:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.316 11:59:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.316 11:59:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.316 11:59:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.316 11:59:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108627068 kB' 'MemAvailable: 112151972 kB' 'Buffers: 4124 kB' 'Cached: 10150432 kB' 'SwapCached: 0 kB' 'Active: 7259132 kB' 'Inactive: 3515708 kB' 'Active(anon): 6568736 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623068 kB' 'Mapped: 175376 kB' 'Shmem: 5948452 kB' 'KReclaimable: 286264 kB' 'Slab: 1036944 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750680 kB' 'KernelStack: 27168 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 7931828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234844 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 
-- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.316 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.316 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 
00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.317 11:59:33 -- setup/common.sh@33 -- # echo 0 00:03:32.317 11:59:33 -- setup/common.sh@33 -- # return 0 00:03:32.317 11:59:33 -- setup/hugepages.sh@99 -- # surp=0 00:03:32.317 11:59:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.317 11:59:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.317 11:59:33 -- setup/common.sh@18 -- # local node= 00:03:32.317 11:59:33 -- setup/common.sh@19 -- # local var val 00:03:32.317 11:59:33 -- setup/common.sh@20 
-- # local mem_f mem 00:03:32.317 11:59:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.317 11:59:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.317 11:59:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.317 11:59:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.317 11:59:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108626628 kB' 'MemAvailable: 112151532 kB' 'Buffers: 4124 kB' 'Cached: 10150444 kB' 'SwapCached: 0 kB' 'Active: 7258184 kB' 'Inactive: 3515708 kB' 'Active(anon): 6567788 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622528 kB' 'Mapped: 175304 kB' 'Shmem: 5948464 kB' 'KReclaimable: 286264 kB' 'Slab: 1036908 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750644 kB' 'KernelStack: 26880 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 7928924 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 
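The verification pass below repeats the same field scan for HugePages_Rsvd; once those values are in hand, the per-node bookkeeping seen earlier in the trace (add reserved pages to each node's observed count, then add that node's surplus) produces the "nodeN=... expecting ..." lines and the final sorted comparison. A rough reconstruction, assuming both reserved and surplus counts are 0 as they are in this run:

#!/usr/bin/env bash
# Illustrative reconstruction of the comparison that prints
# "node0=512 expecting 513" / "node1=513 expecting 512" earlier in the trace;
# the array values are taken from this log, the structure is an assumption.
declare -a nodes_test=(512 513) nodes_sys=(513 512)
declare -a sorted_t sorted_s
resv=0 surp=0                                # both read as 0 in this run

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv + surp ))
done
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1             # value used as index -> a sorted set
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
done
# Both index lists expand to "512 513", so the distribution is accepted.
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo OK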
00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.317 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.317 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- 
setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.318 11:59:33 -- setup/common.sh@33 -- # echo 0 00:03:32.318 11:59:33 -- setup/common.sh@33 -- # return 0 00:03:32.318 11:59:33 -- setup/hugepages.sh@100 -- # resv=0 00:03:32.318 11:59:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:32.318 nr_hugepages=1536 00:03:32.318 11:59:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.318 resv_hugepages=0 00:03:32.318 11:59:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.318 surplus_hugepages=0 00:03:32.318 11:59:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.318 anon_hugepages=0 00:03:32.318 11:59:33 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:32.318 11:59:33 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:32.318 11:59:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.318 11:59:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.318 11:59:33 -- setup/common.sh@18 -- # local node= 00:03:32.318 11:59:33 -- setup/common.sh@19 -- # local var val 00:03:32.318 11:59:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.318 11:59:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.318 11:59:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.318 11:59:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.318 11:59:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.318 11:59:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108626004 kB' 'MemAvailable: 112150908 
kB' 'Buffers: 4124 kB' 'Cached: 10150460 kB' 'SwapCached: 0 kB' 'Active: 7258124 kB' 'Inactive: 3515708 kB' 'Active(anon): 6567728 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622508 kB' 'Mapped: 175304 kB' 'Shmem: 5948480 kB' 'KReclaimable: 286264 kB' 'Slab: 1036972 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750708 kB' 'KernelStack: 26976 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 7928936 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234684 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
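# The scan running through this part of the trace is setup/common.sh's get_meminfo:
# it reads /proc/meminfo (or a node's meminfo from sysfs), splits each line on ': ',
# and echoes the value once the requested field name matches. A minimal standalone
# sketch of that lookup, assuming only the standard /proc and sysfs paths (the
# function name here is illustrative, not SPDK's API):
get_meminfo_sketch() {
    local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
    # Prefer the per-node view when a node id is given and sysfs exposes one.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    shopt -s extglob
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total -> 1536 in the snapshot printed above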
00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.318 11:59:33 -- setup/common.sh@32 -- 
# continue 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.318 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.318 11:59:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.319 11:59:33 -- setup/common.sh@33 -- # echo 1536 00:03:32.319 11:59:33 -- setup/common.sh@33 -- # return 0 00:03:32.319 11:59:33 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:32.319 11:59:33 -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.319 11:59:33 -- setup/hugepages.sh@27 -- # local node 00:03:32.319 11:59:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.319 11:59:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:32.319 11:59:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.319 11:59:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:32.319 11:59:33 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.319 11:59:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.319 11:59:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.319 11:59:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.319 11:59:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.319 11:59:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.319 11:59:33 -- setup/common.sh@18 -- # local node=0 00:03:32.319 11:59:33 -- setup/common.sh@19 -- # local var val 00:03:32.319 11:59:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.319 11:59:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.319 11:59:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.319 11:59:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.319 11:59:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.319 11:59:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60139556 kB' 'MemUsed: 5519452 kB' 'SwapCached: 0 kB' 'Active: 2283320 kB' 'Inactive: 108924 kB' 'Active(anon): 1973800 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 108924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2284948 kB' 'Mapped: 104488 kB' 'AnonPages: 110428 kB' 'Shmem: 1866504 kB' 'KernelStack: 12104 kB' 'PageTables: 3408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157176 kB' 'Slab: 523592 kB' 'SReclaimable: 157176 kB' 'SUnreclaim: 366416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- 
setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 
00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@33 -- # echo 0 00:03:32.319 11:59:33 -- setup/common.sh@33 -- # return 0 00:03:32.319 11:59:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.319 11:59:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.319 11:59:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.319 11:59:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:32.319 11:59:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.319 11:59:33 -- 
setup/common.sh@18 -- # local node=1 00:03:32.319 11:59:33 -- setup/common.sh@19 -- # local var val 00:03:32.319 11:59:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.319 11:59:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.319 11:59:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:32.319 11:59:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:32.319 11:59:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.319 11:59:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 48490300 kB' 'MemUsed: 12189560 kB' 'SwapCached: 0 kB' 'Active: 4974812 kB' 'Inactive: 3406784 kB' 'Active(anon): 4593936 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3406784 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7869664 kB' 'Mapped: 70816 kB' 'AnonPages: 512096 kB' 'Shmem: 4082004 kB' 'KernelStack: 14872 kB' 'PageTables: 5056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129088 kB' 'Slab: 513364 kB' 'SReclaimable: 129088 kB' 'SUnreclaim: 384276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.319 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.319 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.320 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 
00:03:32.320 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.320 11:59:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # continue 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.661 11:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.661 11:59:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.661 11:59:33 -- setup/common.sh@33 -- # echo 0 00:03:32.661 11:59:33 -- setup/common.sh@33 -- # return 0 00:03:32.661 11:59:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.661 11:59:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.661 11:59:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.661 11:59:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.661 11:59:33 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:32.661 node0=512 expecting 512 00:03:32.661 11:59:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.661 11:59:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.661 11:59:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.661 11:59:33 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:32.661 node1=1024 expecting 1024 00:03:32.661 11:59:33 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:32.661 00:03:32.661 real 0m3.875s 00:03:32.661 user 0m1.593s 00:03:32.661 sys 0m2.333s 00:03:32.661 11:59:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:32.661 11:59:33 -- common/autotest_common.sh@10 -- # set +x 00:03:32.661 ************************************ 00:03:32.661 END TEST custom_alloc 00:03:32.661 ************************************ 00:03:32.661 11:59:33 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:32.661 11:59:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.661 11:59:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.661 11:59:33 -- common/autotest_common.sh@10 -- # set +x 00:03:32.661 ************************************ 00:03:32.661 START TEST no_shrink_alloc 00:03:32.661 ************************************ 00:03:32.661 11:59:33 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:03:32.661 11:59:33 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:32.661 11:59:33 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.661 11:59:33 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:32.661 11:59:33 -- setup/hugepages.sh@51 -- # shift 00:03:32.661 11:59:33 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:32.661 11:59:33 -- setup/hugepages.sh@52 -- # local node_ids 00:03:32.661 11:59:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages 
)) 00:03:32.661 11:59:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.661 11:59:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:32.661 11:59:33 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:32.661 11:59:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.661 11:59:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.661 11:59:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.661 11:59:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.661 11:59:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.661 11:59:33 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:32.661 11:59:33 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:32.661 11:59:33 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:32.661 11:59:33 -- setup/hugepages.sh@73 -- # return 0 00:03:32.661 11:59:33 -- setup/hugepages.sh@198 -- # setup output 00:03:32.661 11:59:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.661 11:59:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:35.958 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:35.958 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:35.958 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:36.221 11:59:37 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:36.221 11:59:37 -- setup/hugepages.sh@89 -- # local node 00:03:36.221 11:59:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.221 11:59:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.221 11:59:37 -- setup/hugepages.sh@92 -- # local surp 00:03:36.221 11:59:37 -- setup/hugepages.sh@93 -- # local resv 00:03:36.221 11:59:37 -- setup/hugepages.sh@94 -- # local anon 00:03:36.221 11:59:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.221 11:59:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.221 11:59:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.221 11:59:37 -- setup/common.sh@18 -- # local node= 00:03:36.221 11:59:37 -- setup/common.sh@19 -- # local var val 00:03:36.221 11:59:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.221 11:59:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.221 11:59:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.221 11:59:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.221 11:59:37 -- setup/common.sh@28 -- # mapfile -t 
mem 00:03:36.221 11:59:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109670516 kB' 'MemAvailable: 113195420 kB' 'Buffers: 4124 kB' 'Cached: 10150572 kB' 'SwapCached: 0 kB' 'Active: 7259180 kB' 'Inactive: 3515708 kB' 'Active(anon): 6568784 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623440 kB' 'Mapped: 175356 kB' 'Shmem: 5948592 kB' 'KReclaimable: 286264 kB' 'Slab: 1036856 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750592 kB' 'KernelStack: 26976 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7929340 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234732 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 11:59:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 
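# The no_shrink_alloc test that started above asked get_test_nr_hugepages for
# 2097152 kB pinned to node 0; with the 2048 kB hugepage size reported in the
# snapshots, that works out to 2097152 / 2048 = 1024 pages, the nr_hugepages value
# carried through the rest of this trace. A one-line sanity check of that
# arithmetic (pages_for_size is a hypothetical helper, not part of the test scripts):
pages_for_size() { echo $(( $1 / $(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) )); }
# pages_for_size 2097152   -> 1024 on a system with 2048 kB hugepages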
00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.222 11:59:37 -- setup/common.sh@33 -- # echo 0 00:03:36.222 11:59:37 -- setup/common.sh@33 -- # return 0 00:03:36.222 11:59:37 -- setup/hugepages.sh@97 -- # anon=0 00:03:36.222 11:59:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.222 11:59:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.222 11:59:37 -- setup/common.sh@18 -- # local node= 00:03:36.222 11:59:37 -- setup/common.sh@19 -- # local var val 00:03:36.222 11:59:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.222 11:59:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.222 11:59:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.222 11:59:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.222 11:59:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.222 11:59:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109670860 kB' 'MemAvailable: 113195764 kB' 'Buffers: 4124 kB' 'Cached: 10150576 kB' 'SwapCached: 0 kB' 'Active: 7258840 kB' 'Inactive: 3515708 kB' 'Active(anon): 6568444 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623128 kB' 'Mapped: 175332 kB' 'Shmem: 5948596 kB' 'KReclaimable: 286264 kB' 'Slab: 1036840 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750576 kB' 'KernelStack: 26960 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7929352 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
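The long run of "continue" lines above and below is test/setup/common.sh's get_meminfo helper scanning /proc/meminfo one key at a time: every field that is not the requested one (AnonHugePages first, then HugePages_Surp) falls through to continue, and the matching field's value is echoed and returned (0 for AnonHugePages in this run). A minimal standalone sketch of that scan pattern, using the hypothetical name get_meminfo_value rather than the real helper:

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo field, e.g. "HugePages_Total".
    # Hypothetical name; a simplified sketch of the scan seen in the trace,
    # not the SPDK helper itself.
    get_meminfo_value() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # skip every non-matching key
            echo "$val"
            return 0
        done </proc/meminfo
        return 1                                # requested field not present
    }

    get_meminfo_value AnonHugePages   # prints 0 on this build host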
00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 
11:59:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.223 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.224 11:59:37 -- setup/common.sh@33 -- # echo 0 00:03:36.224 11:59:37 -- setup/common.sh@33 -- # return 0 00:03:36.224 11:59:37 -- setup/hugepages.sh@99 -- # surp=0 00:03:36.224 11:59:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.224 11:59:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.224 11:59:37 -- setup/common.sh@18 -- # local node= 00:03:36.224 11:59:37 -- setup/common.sh@19 -- # local var val 00:03:36.224 11:59:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.224 11:59:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.224 11:59:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.224 11:59:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.224 11:59:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.224 11:59:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109670160 kB' 'MemAvailable: 113195064 kB' 'Buffers: 4124 kB' 'Cached: 10150588 kB' 'SwapCached: 0 kB' 'Active: 7258812 kB' 'Inactive: 3515708 kB' 'Active(anon): 6568416 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623100 kB' 'Mapped: 175332 kB' 'Shmem: 5948608 kB' 'KReclaimable: 286264 kB' 'Slab: 1036868 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750604 kB' 'KernelStack: 26960 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7929368 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:36.224 11:59:37 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- 
setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.224 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.224 11:59:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 
11:59:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.225 11:59:37 -- setup/common.sh@33 -- # echo 0 00:03:36.225 
11:59:37 -- setup/common.sh@33 -- # return 0 00:03:36.225 11:59:37 -- setup/hugepages.sh@100 -- # resv=0 00:03:36.225 11:59:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.225 nr_hugepages=1024 00:03:36.225 11:59:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.225 resv_hugepages=0 00:03:36.225 11:59:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.225 surplus_hugepages=0 00:03:36.225 11:59:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.225 anon_hugepages=0 00:03:36.225 11:59:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.225 11:59:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:36.225 11:59:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.225 11:59:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.225 11:59:37 -- setup/common.sh@18 -- # local node= 00:03:36.225 11:59:37 -- setup/common.sh@19 -- # local var val 00:03:36.225 11:59:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.225 11:59:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.225 11:59:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.225 11:59:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.225 11:59:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.225 11:59:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109670492 kB' 'MemAvailable: 113195396 kB' 'Buffers: 4124 kB' 'Cached: 10150600 kB' 'SwapCached: 0 kB' 'Active: 7258824 kB' 'Inactive: 3515708 kB' 'Active(anon): 6568428 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623096 kB' 'Mapped: 175332 kB' 'Shmem: 5948620 kB' 'KReclaimable: 286264 kB' 'Slab: 1036868 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 750604 kB' 'KernelStack: 26960 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7929380 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
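With anon=0, surp=0 and resv=0 collected, hugepages.sh echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then guards on (( 1024 == nr_hugepages + surp + resv )) before re-reading HugePages_Total in the scan that continues below. In other words, every allocated hugepage must be accounted for by the requested count plus surplus plus reserved pages. A compact sketch of that accounting check, with the values hard-coded from this run:

    #!/usr/bin/env bash
    # Hugepage accounting as observed in this run's /proc/meminfo.
    total=1024          # HugePages_Total
    nr_hugepages=1024   # pages requested for the test
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd

    # Every allocated hugepage must be requested, surplus, or reserved.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting OK: $total == $nr_hugepages + $surp + $resv"
    else
        echo "hugepage accounting mismatch" >&2
        exit 1
    fi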
00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.225 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.225 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.487 11:59:37 -- setup/common.sh@32 -- # continue 
00:03:36.487 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.487 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 
11:59:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.488 11:59:37 -- setup/common.sh@33 -- # echo 1024 00:03:36.488 11:59:37 -- setup/common.sh@33 -- # return 0 00:03:36.488 11:59:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.488 11:59:37 -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.488 11:59:37 -- setup/hugepages.sh@27 -- # local node 00:03:36.488 11:59:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.488 11:59:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.488 11:59:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.488 11:59:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:36.488 11:59:37 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:36.488 11:59:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.488 11:59:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.488 11:59:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.488 11:59:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.488 11:59:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.488 11:59:37 
-- setup/common.sh@18 -- # local node=0 00:03:36.488 11:59:37 -- setup/common.sh@19 -- # local var val 00:03:36.488 11:59:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.488 11:59:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.488 11:59:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.488 11:59:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.488 11:59:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.488 11:59:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59092236 kB' 'MemUsed: 6566772 kB' 'SwapCached: 0 kB' 'Active: 2285620 kB' 'Inactive: 108924 kB' 'Active(anon): 1976100 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 108924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2285040 kB' 'Mapped: 104488 kB' 'AnonPages: 112676 kB' 'Shmem: 1866596 kB' 'KernelStack: 12088 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157176 kB' 'Slab: 523592 kB' 'SReclaimable: 157176 kB' 'SUnreclaim: 366416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.488 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.488 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # continue 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.489 11:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.489 11:59:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.489 11:59:37 -- setup/common.sh@33 -- # echo 0 00:03:36.489 11:59:37 -- setup/common.sh@33 -- # return 0 00:03:36.489 11:59:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.489 11:59:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.490 11:59:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.490 11:59:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.490 11:59:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:36.490 node0=1024 expecting 1024 00:03:36.490 11:59:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:36.490 11:59:37 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:36.490 11:59:37 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:36.490 11:59:37 -- setup/hugepages.sh@202 -- # setup output 00:03:36.490 11:59:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.490 11:59:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:39.789 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:39.789 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:39.789 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:40.055 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:40.055 11:59:41 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:03:40.055 11:59:41 -- setup/hugepages.sh@89 -- # local node 00:03:40.055 11:59:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:40.055 11:59:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:40.055 11:59:41 -- setup/hugepages.sh@92 -- # local surp 00:03:40.055 11:59:41 -- setup/hugepages.sh@93 -- # local resv 00:03:40.055 11:59:41 -- setup/hugepages.sh@94 -- # local anon 00:03:40.055 11:59:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:40.055 11:59:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:40.055 11:59:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:40.055 11:59:41 -- setup/common.sh@18 -- # local node= 00:03:40.055 11:59:41 -- setup/common.sh@19 -- # local var val 00:03:40.055 11:59:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.055 11:59:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.055 11:59:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.055 11:59:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.055 11:59:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.055 11:59:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.055 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.055 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109665488 kB' 'MemAvailable: 113190392 kB' 'Buffers: 4124 kB' 'Cached: 10150692 kB' 'SwapCached: 0 kB' 'Active: 7261676 kB' 'Inactive: 3515708 kB' 'Active(anon): 6571280 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625484 kB' 'Mapped: 175436 kB' 'Shmem: 5948712 kB' 'KReclaimable: 286264 kB' 'Slab: 1037568 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 751304 kB' 'KernelStack: 26960 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7930272 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.056 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.056 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.057 11:59:41 -- setup/common.sh@33 -- # echo 0 00:03:40.057 11:59:41 -- setup/common.sh@33 -- # return 0 00:03:40.057 11:59:41 -- setup/hugepages.sh@97 -- # anon=0 00:03:40.057 11:59:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:40.057 
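(Editor's aside, not part of the captured log.) At this point the trace has anon=0 and is fetching HugePages_Surp, then HugePages_Rsvd, before comparing totals against the requested pool size. The sketch below condenses that accounting check with illustrative names (not the exact SPDK code): the pool is accepted when HugePages_Total equals the requested count plus surplus and reserved pages, which is exactly the (( 1024 == nr_hugepages + surp + resv )) test the script runs.

# Standalone sketch of the hugepage accounting this verification performs.
verify_hugepages_sketch() {
    local nr_hugepages=$1
    local anon surp resv total
    anon=$(awk '/^AnonHugePages:/    {print $2}' /proc/meminfo)   # THP in use, kB
    surp=$(awk '/^HugePages_Surp:/   {print $2}' /proc/meminfo)   # over-committed pages
    resv=$(awk '/^HugePages_Rsvd:/   {print $2}' /proc/meminfo)   # reserved, not yet faulted
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || return 1
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
}
# On this run /proc/meminfo reports HugePages_Total: 1024, HugePages_Rsvd: 0 and
# HugePages_Surp: 0, so "verify_hugepages_sketch 1024" succeeds.

(End of aside; the captured trace resumes below.)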
11:59:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.057 11:59:41 -- setup/common.sh@18 -- # local node= 00:03:40.057 11:59:41 -- setup/common.sh@19 -- # local var val 00:03:40.057 11:59:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.057 11:59:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.057 11:59:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.057 11:59:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.057 11:59:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.057 11:59:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109666460 kB' 'MemAvailable: 113191364 kB' 'Buffers: 4124 kB' 'Cached: 10150696 kB' 'SwapCached: 0 kB' 'Active: 7261232 kB' 'Inactive: 3515708 kB' 'Active(anon): 6570836 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625060 kB' 'Mapped: 175428 kB' 'Shmem: 5948716 kB' 'KReclaimable: 286264 kB' 'Slab: 1037568 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 751304 kB' 'KernelStack: 26960 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7930284 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234700 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # 
continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.057 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.057 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.058 11:59:41 -- setup/common.sh@33 -- # echo 0 00:03:40.058 11:59:41 -- setup/common.sh@33 -- # return 0 00:03:40.058 11:59:41 -- setup/hugepages.sh@99 -- # surp=0 00:03:40.058 11:59:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:40.058 11:59:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.058 11:59:41 -- setup/common.sh@18 -- # local node= 00:03:40.058 11:59:41 -- setup/common.sh@19 -- # local var val 00:03:40.058 11:59:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.058 11:59:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.058 11:59:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.058 11:59:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.058 11:59:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.058 11:59:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109667228 kB' 'MemAvailable: 113192132 kB' 'Buffers: 4124 kB' 'Cached: 10150696 kB' 'SwapCached: 0 kB' 
'Active: 7260484 kB' 'Inactive: 3515708 kB' 'Active(anon): 6570088 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624744 kB' 'Mapped: 175348 kB' 'Shmem: 5948716 kB' 'KReclaimable: 286264 kB' 'Slab: 1037484 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 751220 kB' 'KernelStack: 26944 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7930300 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234684 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.058 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.058 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 
00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.059 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.059 11:59:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.059 11:59:41 -- setup/common.sh@33 -- # echo 0 00:03:40.059 11:59:41 -- setup/common.sh@33 -- # return 0 00:03:40.059 11:59:41 -- setup/hugepages.sh@100 -- # resv=0 00:03:40.059 11:59:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.059 nr_hugepages=1024 00:03:40.059 11:59:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.059 resv_hugepages=0 00:03:40.059 11:59:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.059 surplus_hugepages=0 00:03:40.059 11:59:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.060 anon_hugepages=0 00:03:40.060 11:59:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.060 11:59:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:40.060 11:59:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.060 11:59:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.060 11:59:41 -- setup/common.sh@18 -- # local node= 00:03:40.060 11:59:41 -- setup/common.sh@19 -- # local var val 00:03:40.060 11:59:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.060 11:59:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.060 11:59:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.060 11:59:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.060 11:59:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.060 11:59:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109674772 kB' 'MemAvailable: 113199676 kB' 'Buffers: 4124 kB' 'Cached: 10150720 kB' 'SwapCached: 0 kB' 'Active: 7260416 kB' 'Inactive: 3515708 kB' 'Active(anon): 6570020 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515708 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624672 kB' 'Mapped: 175348 kB' 'Shmem: 5948740 kB' 'KReclaimable: 286264 kB' 'Slab: 1037476 kB' 'SReclaimable: 286264 kB' 'SUnreclaim: 751212 kB' 'KernelStack: 26960 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 7930312 kB' 'VmallocTotal: 
13743895347199 kB' 'VmallocUsed: 234684 kB' 'VmallocChunk: 0 kB' 'Percpu: 103104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3564916 kB' 'DirectMap2M: 42252288 kB' 'DirectMap1G: 90177536 kB' 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.060 11:59:41 -- 
setup/common.sh@32 -- # continue 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.060 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.060 11:59:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.061 11:59:41 -- 
setup/common.sh@33 -- # echo 1024 00:03:40.061 11:59:41 -- setup/common.sh@33 -- # return 0 00:03:40.061 11:59:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.061 11:59:41 -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.061 11:59:41 -- setup/hugepages.sh@27 -- # local node 00:03:40.061 11:59:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.061 11:59:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:40.061 11:59:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.061 11:59:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:40.061 11:59:41 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.061 11:59:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.061 11:59:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.061 11:59:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.061 11:59:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:40.061 11:59:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.061 11:59:41 -- setup/common.sh@18 -- # local node=0 00:03:40.061 11:59:41 -- setup/common.sh@19 -- # local var val 00:03:40.061 11:59:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.061 11:59:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.061 11:59:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.061 11:59:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.061 11:59:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.061 11:59:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59085368 kB' 'MemUsed: 6573640 kB' 'SwapCached: 0 kB' 'Active: 2284664 kB' 'Inactive: 108924 kB' 'Active(anon): 1975144 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 108924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2285128 kB' 'Mapped: 104492 kB' 'AnonPages: 111704 kB' 'Shmem: 1866684 kB' 'KernelStack: 12088 kB' 'PageTables: 3452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157176 kB' 'Slab: 523892 kB' 'SReclaimable: 157176 kB' 'SUnreclaim: 366716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.061 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.061 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 
11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # continue 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.062 11:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.062 11:59:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.062 11:59:41 -- setup/common.sh@33 -- # echo 0 00:03:40.062 11:59:41 -- setup/common.sh@33 -- # return 0 00:03:40.062 11:59:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.062 11:59:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.062 11:59:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.062 11:59:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.062 11:59:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:40.062 node0=1024 expecting 1024 00:03:40.062 11:59:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:40.062 00:03:40.062 real 0m7.497s 00:03:40.062 user 0m2.921s 00:03:40.062 sys 0m4.665s 00:03:40.062 11:59:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:40.062 11:59:41 -- common/autotest_common.sh@10 -- # set +x 00:03:40.062 ************************************ 00:03:40.062 END TEST no_shrink_alloc 00:03:40.062 ************************************ 00:03:40.062 11:59:41 -- setup/hugepages.sh@217 -- # clear_hp 00:03:40.062 11:59:41 -- setup/hugepages.sh@37 -- # local node hp 00:03:40.062 11:59:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.062 
11:59:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.062 11:59:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.062 11:59:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.062 11:59:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.062 11:59:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.062 11:59:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.062 11:59:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.062 11:59:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.062 11:59:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.062 11:59:41 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:40.062 11:59:41 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:40.062 00:03:40.062 real 0m27.751s 00:03:40.062 user 0m10.674s 00:03:40.062 sys 0m17.142s 00:03:40.062 11:59:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:40.062 11:59:41 -- common/autotest_common.sh@10 -- # set +x 00:03:40.062 ************************************ 00:03:40.062 END TEST hugepages 00:03:40.062 ************************************ 00:03:40.325 11:59:41 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:40.325 11:59:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:40.325 11:59:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:40.325 11:59:41 -- common/autotest_common.sh@10 -- # set +x 00:03:40.325 ************************************ 00:03:40.325 START TEST driver 00:03:40.325 ************************************ 00:03:40.325 11:59:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:40.586 * Looking for test storage... 
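Editor's note on the hugepages trace above: the long run of "[[ <field> == HugePages_Rsvd ]] ... continue" lines is the xtrace of a single shell helper that reads /proc/meminfo (or a per-node meminfo under /sys/devices/system/node) one field at a time until it reaches the requested key, echoes that value, and returns - the same helper that feeds the "node0=1024 expecting 1024" check at the end of no_shrink_alloc. A condensed, stand-alone sketch of that pattern (illustrative only, not the SPDK helper verbatim; the function name and the sed/awk shortcut are assumptions):

    get_meminfo() {
        local key=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node statistics live in sysfs; fall back to /proc/meminfo otherwise.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node N "; strip it, then print
        # the first value whose key matches ("HugePages_Total:  1024" -> 1024).
        sed -E 's/^Node [0-9]+ //' "$mem_f" | awk -v k="$key:" '$1 == k {print $2; exit}'
    }

    get_meminfo HugePages_Total      # 1024 in the run above
    get_meminfo HugePages_Surp 0     # surplus hugepages on NUMA node 0

The test script does this with a pure-bash read loop rather than sed/awk, which is why every non-matching field shows up in the log as its own "continue" line.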
00:03:40.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:40.586 11:59:41 -- setup/driver.sh@68 -- # setup reset 00:03:40.586 11:59:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.586 11:59:41 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.878 11:59:46 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:45.878 11:59:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:45.878 11:59:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:45.878 11:59:46 -- common/autotest_common.sh@10 -- # set +x 00:03:45.878 ************************************ 00:03:45.878 START TEST guess_driver 00:03:45.878 ************************************ 00:03:45.878 11:59:46 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:45.878 11:59:46 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:45.878 11:59:46 -- setup/driver.sh@47 -- # local fail=0 00:03:45.878 11:59:46 -- setup/driver.sh@49 -- # pick_driver 00:03:45.878 11:59:46 -- setup/driver.sh@36 -- # vfio 00:03:45.878 11:59:46 -- setup/driver.sh@21 -- # local iommu_grups 00:03:45.878 11:59:46 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:45.878 11:59:46 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:45.878 11:59:46 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:45.878 11:59:46 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:45.878 11:59:46 -- setup/driver.sh@29 -- # (( 322 > 0 )) 00:03:45.878 11:59:46 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:45.878 11:59:46 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:45.878 11:59:46 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:45.878 11:59:46 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:45.878 11:59:46 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:45.878 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:45.878 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:45.878 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:45.878 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:45.878 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:45.878 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:45.878 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:45.878 11:59:46 -- setup/driver.sh@30 -- # return 0 00:03:45.878 11:59:46 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:45.878 11:59:46 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:45.878 11:59:46 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:45.878 11:59:46 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:45.878 Looking for driver=vfio-pci 00:03:45.878 11:59:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.878 11:59:46 -- setup/driver.sh@45 -- # setup output config 00:03:45.878 11:59:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.878 11:59:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:49.182 11:59:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:49 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:49.182 11:59:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.182 11:59:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.182 11:59:50 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:49.182 11:59:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.182 11:59:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.444 11:59:50 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:49.444 11:59:50 -- setup/driver.sh@65 -- # setup reset 00:03:49.444 11:59:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.444 11:59:50 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.651 00:03:53.651 real 0m8.135s 00:03:53.651 user 0m2.547s 00:03:53.651 sys 0m4.727s 00:03:53.651 11:59:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:53.651 11:59:54 -- common/autotest_common.sh@10 -- # set +x 00:03:53.651 ************************************ 00:03:53.651 END TEST guess_driver 00:03:53.651 ************************************ 00:03:53.651 00:03:53.651 real 0m13.312s 00:03:53.651 user 0m4.129s 00:03:53.651 sys 0m7.552s 00:03:53.651 11:59:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:53.651 11:59:54 -- common/autotest_common.sh@10 -- # set +x 00:03:53.651 ************************************ 00:03:53.651 END TEST driver 00:03:53.651 ************************************ 00:03:53.651 11:59:54 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:53.651 11:59:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:53.651 11:59:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:53.651 11:59:54 -- common/autotest_common.sh@10 -- # set +x 00:03:53.912 ************************************ 00:03:53.912 START TEST devices 00:03:53.912 ************************************ 00:03:53.912 11:59:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:53.912 * Looking for test storage... 
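For context on the guess_driver result above: the trace checked /sys/module/vfio/parameters/enable_unsafe_noiommu_mode, counted 322 populated IOMMU groups, confirmed that "modprobe --show-depends vfio_pci" resolves to real *.ko modules, and settled on vfio-pci. A condensed sketch of that decision (function name assumed; only the checks visible in the trace are reproduced here):

    pick_vfio_driver() {
        shopt -s nullglob                 # so an empty iommu_groups dir gives an empty array
        local unsafe=N
        local groups=(/sys/kernel/iommu_groups/*)
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        # Populated IOMMU groups (or unsafe no-IOMMU mode) plus a resolvable
        # vfio_pci module means devices can be bound to vfio-pci for the test.
        if [[ $unsafe == Y ]] || ((${#groups[@]} > 0)); then
            if [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
        return 1
    }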
00:03:53.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:53.912 11:59:55 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:53.912 11:59:55 -- setup/devices.sh@192 -- # setup reset 00:03:53.912 11:59:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.912 11:59:55 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.117 11:59:59 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:58.117 11:59:59 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:58.117 11:59:59 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:58.117 11:59:59 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:58.117 11:59:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:58.117 11:59:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:58.117 11:59:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:58.117 11:59:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:58.117 11:59:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:58.117 11:59:59 -- setup/devices.sh@196 -- # blocks=() 00:03:58.117 11:59:59 -- setup/devices.sh@196 -- # declare -a blocks 00:03:58.117 11:59:59 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:58.117 11:59:59 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:58.117 11:59:59 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:58.117 11:59:59 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:58.117 11:59:59 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:58.117 11:59:59 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:58.117 11:59:59 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:03:58.117 11:59:59 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:58.117 11:59:59 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:58.117 11:59:59 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:58.117 11:59:59 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:58.117 No valid GPT data, bailing 00:03:58.117 11:59:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:58.117 11:59:59 -- scripts/common.sh@391 -- # pt= 00:03:58.117 11:59:59 -- scripts/common.sh@392 -- # return 1 00:03:58.117 11:59:59 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:58.117 11:59:59 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:58.117 11:59:59 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:58.117 11:59:59 -- setup/common.sh@80 -- # echo 1920383410176 00:03:58.117 11:59:59 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:03:58.117 11:59:59 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:58.117 11:59:59 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:03:58.117 11:59:59 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:58.117 11:59:59 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:58.117 11:59:59 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:58.117 11:59:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.117 11:59:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.117 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:03:58.117 ************************************ 00:03:58.117 START TEST nvme_mount 00:03:58.117 ************************************ 00:03:58.117 11:59:59 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:03:58.117 11:59:59 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:58.117 11:59:59 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:58.117 11:59:59 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.117 11:59:59 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.117 11:59:59 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:58.117 11:59:59 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:58.117 11:59:59 -- setup/common.sh@40 -- # local part_no=1 00:03:58.117 11:59:59 -- setup/common.sh@41 -- # local size=1073741824 00:03:58.378 11:59:59 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:58.378 11:59:59 -- setup/common.sh@44 -- # parts=() 00:03:58.378 11:59:59 -- setup/common.sh@44 -- # local parts 00:03:58.378 11:59:59 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:58.378 11:59:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.378 11:59:59 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:58.378 11:59:59 -- setup/common.sh@46 -- # (( part++ )) 00:03:58.378 11:59:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.378 11:59:59 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:58.378 11:59:59 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:58.378 11:59:59 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:59.319 Creating new GPT entries in memory. 00:03:59.319 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:59.319 other utilities. 00:03:59.319 12:00:00 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:59.319 12:00:00 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.319 12:00:00 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:59.320 12:00:00 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:59.320 12:00:00 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:00.260 Creating new GPT entries in memory. 00:04:00.260 The operation has completed successfully. 
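For reference, the partition-and-mount sequence the nvme_mount test has just driven (zap-all, one ~1 GiB partition at sectors 2048-2099199, ext4, mount, dummy test file) can be reproduced in a few lines. This is a minimal sketch under assumed paths - the real test mounts under .../spdk/test/setup/nvme_mount and also waits for the partition uevent via sync_dev_uevents.sh before formatting:

    disk=/dev/nvme0n1
    mnt=/tmp/nvme_mount                     # test uses .../spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                # wipe any existing GPT/MBR
    sgdisk "$disk" --new=1:2048:2099199     # 2097152 sectors * 512 B = 1 GiB partition
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"
    touch "$mnt/test_nvme"                  # the dummy file the verify step checks for

Tear-down is the mirror image seen later in this log: umount the test directory, then wipefs --all on the partition and on the whole disk.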
00:04:00.260 12:00:01 -- setup/common.sh@57 -- # (( part++ )) 00:04:00.260 12:00:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.260 12:00:01 -- setup/common.sh@62 -- # wait 3173880 00:04:00.260 12:00:01 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.260 12:00:01 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:00.260 12:00:01 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.260 12:00:01 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:00.260 12:00:01 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:00.260 12:00:01 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.260 12:00:01 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:00.260 12:00:01 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:00.260 12:00:01 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:00.260 12:00:01 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.260 12:00:01 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:00.260 12:00:01 -- setup/devices.sh@53 -- # local found=0 00:04:00.260 12:00:01 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:00.260 12:00:01 -- setup/devices.sh@56 -- # : 00:04:00.260 12:00:01 -- setup/devices.sh@59 -- # local pci status 00:04:00.260 12:00:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.260 12:00:01 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:00.260 12:00:01 -- setup/devices.sh@47 -- # setup output config 00:04:00.260 12:00:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.260 12:00:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.556 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.556 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.556 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.556 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.556 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.556 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.556 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.556 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.556 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.556 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.556 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.556 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.556 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.556 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.556 12:00:04 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.556 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.556 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.556 12:00:04 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:03.556 12:00:04 -- setup/devices.sh@63 -- # found=1 00:04:03.556 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.556 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.556 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.557 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.557 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.557 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.557 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.557 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.557 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.557 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.557 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.557 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.557 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.557 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.557 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.557 12:00:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.557 12:00:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.126 12:00:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:04.126 12:00:05 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:04.126 12:00:05 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.126 12:00:05 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:04.126 12:00:05 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.126 12:00:05 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:04.126 12:00:05 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.126 12:00:05 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.126 12:00:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:04.126 12:00:05 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:04.126 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:04.126 12:00:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:04.126 12:00:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:04.385 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:04.385 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:04.385 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:04.385 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:04.385 12:00:05 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:04.385 12:00:05 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:04.385 12:00:05 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.385 12:00:05 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:04.385 12:00:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:04.385 12:00:05 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.385 12:00:05 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.385 12:00:05 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:04.385 12:00:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:04.385 12:00:05 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.385 12:00:05 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.385 12:00:05 -- setup/devices.sh@53 -- # local found=0 00:04:04.385 12:00:05 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:04.385 12:00:05 -- setup/devices.sh@56 -- # : 00:04:04.385 12:00:05 -- setup/devices.sh@59 -- # local pci status 00:04:04.385 12:00:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.385 12:00:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:04.385 12:00:05 -- setup/devices.sh@47 -- # setup output config 00:04:04.385 12:00:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.385 12:00:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:07.681 12:00:08 -- setup/devices.sh@63 -- # found=1 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.681 12:00:08 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.681 12:00:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.941 12:00:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.941 12:00:09 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:07.941 12:00:09 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.941 12:00:09 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:07.941 12:00:09 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:07.941 12:00:09 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.941 12:00:09 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:07.941 12:00:09 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:07.941 12:00:09 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:07.941 12:00:09 -- setup/devices.sh@50 -- # local mount_point= 00:04:07.941 12:00:09 -- setup/devices.sh@51 -- # local test_file= 00:04:07.941 12:00:09 -- setup/devices.sh@53 -- # local found=0 00:04:07.941 12:00:09 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:07.941 12:00:09 -- setup/devices.sh@59 -- # local pci status 00:04:07.941 12:00:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.941 12:00:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:07.941 12:00:09 -- setup/devices.sh@47 -- # setup output config 00:04:07.941 12:00:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.941 12:00:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:11.311 12:00:11 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:11 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:11 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:11 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:11 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:11 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:11 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:11 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:12 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:12 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:11.311 12:00:12 -- setup/devices.sh@63 -- # found=1 00:04:11.311 12:00:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:12 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.311 12:00:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.311 12:00:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:11.311 12:00:12 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:11.311 12:00:12 -- setup/devices.sh@68 -- # return 0 00:04:11.311 12:00:12 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:11.311 12:00:12 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.311 12:00:12 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:11.311 12:00:12 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:11.311 12:00:12 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:11.311 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:11.311 00:04:11.311 real 0m13.189s 00:04:11.311 user 0m4.027s 00:04:11.311 sys 0m6.987s 00:04:11.311 12:00:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:11.311 12:00:12 -- common/autotest_common.sh@10 -- # set +x 00:04:11.311 ************************************ 00:04:11.311 END TEST nvme_mount 00:04:11.311 ************************************ 00:04:11.572 12:00:12 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:11.572 12:00:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.572 12:00:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.572 12:00:12 -- common/autotest_common.sh@10 -- # set +x 00:04:11.572 ************************************ 00:04:11.572 START TEST dm_mount 00:04:11.572 ************************************ 00:04:11.572 12:00:12 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:11.572 12:00:12 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:11.572 12:00:12 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:11.572 12:00:12 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:11.572 12:00:12 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:11.572 12:00:12 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:11.572 12:00:12 -- setup/common.sh@40 -- # local part_no=2 00:04:11.572 12:00:12 -- setup/common.sh@41 -- # local size=1073741824 00:04:11.572 12:00:12 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:11.572 12:00:12 -- setup/common.sh@44 -- # parts=() 00:04:11.572 12:00:12 -- setup/common.sh@44 -- # local parts 00:04:11.572 12:00:12 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:11.572 12:00:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:11.572 12:00:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:11.572 12:00:12 -- setup/common.sh@46 -- # (( part++ )) 00:04:11.572 12:00:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:11.572 12:00:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:11.572 12:00:12 -- setup/common.sh@46 -- # (( part++ )) 00:04:11.572 12:00:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:11.572 12:00:12 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:11.572 12:00:12 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:11.572 12:00:12 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:12.512 Creating new GPT entries in memory. 00:04:12.512 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:12.512 other utilities. 00:04:12.512 12:00:13 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:12.512 12:00:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.512 12:00:13 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:12.512 12:00:13 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:12.512 12:00:13 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:13.895 Creating new GPT entries in memory. 00:04:13.895 The operation has completed successfully. 
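The dm_mount test traced above lays out the target disk before building a device-mapper volume on it. Condensed into a standalone sketch (device name, sector ranges, and the flock wrapper are taken directly from the traced commands; 1073741824 bytes at 512-byte sectors gives the 2097152-sector partitions seen here), the sequence is roughly:

    # wipe any existing partition table, then create two 1 GiB GPT partitions
    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199      # becomes nvme0n1p1
    flock "$disk" sgdisk "$disk" --new=2:2099200:4196351    # becomes nvme0n1p2

The second sgdisk call corresponds to the log entry that follows immediately below; the flock on the block device mirrors the script's way of serializing access while uevents are still settling.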
00:04:13.895 12:00:14 -- setup/common.sh@57 -- # (( part++ )) 00:04:13.895 12:00:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:13.895 12:00:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:13.895 12:00:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:13.895 12:00:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:14.837 The operation has completed successfully. 00:04:14.837 12:00:15 -- setup/common.sh@57 -- # (( part++ )) 00:04:14.837 12:00:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.837 12:00:15 -- setup/common.sh@62 -- # wait 3179721 00:04:14.837 12:00:15 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:14.837 12:00:15 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:14.837 12:00:15 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:14.837 12:00:15 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:14.837 12:00:15 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:14.837 12:00:15 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:14.837 12:00:15 -- setup/devices.sh@161 -- # break 00:04:14.837 12:00:15 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:14.837 12:00:15 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:14.837 12:00:15 -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:14.837 12:00:15 -- setup/devices.sh@166 -- # dm=dm-1 00:04:14.837 12:00:15 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:14.838 12:00:15 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:14.838 12:00:15 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:14.838 12:00:15 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:14.838 12:00:15 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:14.838 12:00:15 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:14.838 12:00:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:14.838 12:00:15 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:14.838 12:00:15 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:14.838 12:00:15 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:14.838 12:00:15 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:14.838 12:00:15 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:14.838 12:00:15 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:14.838 12:00:15 -- setup/devices.sh@53 -- # local found=0 00:04:14.838 12:00:15 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:14.838 12:00:15 -- setup/devices.sh@56 -- # : 00:04:14.838 12:00:15 -- 
setup/devices.sh@59 -- # local pci status 00:04:14.838 12:00:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.838 12:00:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:14.838 12:00:15 -- setup/devices.sh@47 -- # setup output config 00:04:14.838 12:00:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.838 12:00:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:18.137 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:18.138 12:00:19 -- setup/devices.sh@63 -- # found=1 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.138 12:00:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:18.138 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.408 12:00:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.409 12:00:19 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:18.409 12:00:19 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:18.409 12:00:19 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:18.409 12:00:19 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:18.409 12:00:19 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:18.409 12:00:19 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:18.409 12:00:19 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:18.409 12:00:19 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:18.409 12:00:19 -- setup/devices.sh@50 -- # local mount_point= 00:04:18.409 12:00:19 -- setup/devices.sh@51 -- # local test_file= 00:04:18.409 12:00:19 -- setup/devices.sh@53 -- # local found=0 00:04:18.409 12:00:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:18.409 12:00:19 -- setup/devices.sh@59 -- # local pci status 00:04:18.409 12:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.409 12:00:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:18.409 12:00:19 -- setup/devices.sh@47 -- # setup output config 00:04:18.409 12:00:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.409 12:00:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:21.708 12:00:22 -- setup/devices.sh@63 -- # found=1 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.708 12:00:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.708 12:00:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.968 12:00:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.968 12:00:23 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:21.968 12:00:23 -- setup/devices.sh@68 -- # return 0 00:04:21.968 12:00:23 -- setup/devices.sh@187 -- # cleanup_dm 00:04:21.968 12:00:23 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.968 12:00:23 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:21.968 12:00:23 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:21.968 12:00:23 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.968 12:00:23 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:21.968 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:21.969 12:00:23 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:21.969 12:00:23 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:21.969 00:04:21.969 real 0m10.377s 00:04:21.969 user 0m2.659s 00:04:21.969 sys 0m4.725s 00:04:21.969 12:00:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:21.969 12:00:23 -- common/autotest_common.sh@10 -- # set +x 00:04:21.969 ************************************ 00:04:21.969 END TEST dm_mount 00:04:21.969 ************************************ 00:04:21.969 12:00:23 -- setup/devices.sh@1 -- # cleanup 00:04:21.969 12:00:23 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:21.969 12:00:23 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.969 12:00:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.969 12:00:23 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:21.969 12:00:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.969 12:00:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.228 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 
00:04:22.228 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:22.228 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:22.228 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:22.228 12:00:23 -- setup/devices.sh@12 -- # cleanup_dm 00:04:22.228 12:00:23 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.228 12:00:23 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:22.228 12:00:23 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.228 12:00:23 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:22.228 12:00:23 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.228 12:00:23 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:22.228 00:04:22.228 real 0m28.429s 00:04:22.228 user 0m8.385s 00:04:22.228 sys 0m14.707s 00:04:22.228 12:00:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.228 12:00:23 -- common/autotest_common.sh@10 -- # set +x 00:04:22.228 ************************************ 00:04:22.228 END TEST devices 00:04:22.228 ************************************ 00:04:22.228 00:04:22.228 real 1m36.111s 00:04:22.228 user 0m31.916s 00:04:22.228 sys 0m54.841s 00:04:22.228 12:00:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.228 12:00:23 -- common/autotest_common.sh@10 -- # set +x 00:04:22.228 ************************************ 00:04:22.228 END TEST setup.sh 00:04:22.228 ************************************ 00:04:22.488 12:00:23 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:25.788 Hugepages 00:04:25.788 node hugesize free / total 00:04:25.788 node0 1048576kB 0 / 0 00:04:25.788 node0 2048kB 2048 / 2048 00:04:25.788 node1 1048576kB 0 / 0 00:04:25.788 node1 2048kB 0 / 0 00:04:25.788 00:04:25.788 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.788 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:25.788 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:25.788 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:25.788 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:25.788 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:25.788 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:25.788 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:25.788 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:25.788 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:25.788 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:25.788 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:25.788 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:25.788 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:25.788 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:25.788 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:25.788 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:25.788 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:25.788 12:00:26 -- spdk/autotest.sh@130 -- # uname -s 00:04:25.788 12:00:26 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:25.788 12:00:26 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:25.788 12:00:26 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.991 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:29.991 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:29.991 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:29.991 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:29.991 0000:80:01.2 (8086 0b00): 
ioatdma -> vfio-pci 00:04:29.991 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:29.991 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:29.991 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:29.991 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:29.991 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:29.991 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:29.991 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:29.991 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:29.991 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:29.991 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:29.991 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:31.371 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:31.630 12:00:32 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:32.571 12:00:33 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:32.571 12:00:33 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:32.571 12:00:33 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:32.571 12:00:33 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:32.571 12:00:33 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:32.571 12:00:33 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:32.571 12:00:33 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.571 12:00:33 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:32.571 12:00:33 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:32.571 12:00:33 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:32.571 12:00:33 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:04:32.571 12:00:33 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.877 Waiting for block devices as requested 00:04:35.877 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:36.137 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:36.137 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:36.137 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:36.397 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:36.398 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:36.398 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:36.398 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:36.658 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:36.658 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:36.920 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:36.920 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:36.920 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:37.181 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:37.181 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:37.181 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:37.181 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:37.441 12:00:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:37.441 12:00:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:37.441 12:00:38 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:04:37.441 12:00:38 -- common/autotest_common.sh@1488 -- # grep 0000:65:00.0/nvme/nvme 00:04:37.441 12:00:38 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:37.441 12:00:38 -- common/autotest_common.sh@1489 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:37.441 12:00:38 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:37.441 12:00:38 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:37.441 12:00:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:37.441 12:00:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:37.441 12:00:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:37.441 12:00:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:37.441 12:00:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:37.441 12:00:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:37.441 12:00:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:37.441 12:00:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:37.441 12:00:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:37.441 12:00:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:37.441 12:00:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:37.442 12:00:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:37.442 12:00:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:37.702 12:00:38 -- common/autotest_common.sh@1543 -- # continue 00:04:37.702 12:00:38 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:37.702 12:00:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:37.702 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:04:37.702 12:00:38 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:37.702 12:00:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:37.702 12:00:38 -- common/autotest_common.sh@10 -- # set +x 00:04:37.702 12:00:38 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.013 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:41.013 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:41.013 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:41.013 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:41.013 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:41.013 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:41.013 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:41.013 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:41.013 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:41.013 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:41.013 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:41.013 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:41.273 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:41.273 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:41.273 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:41.273 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:41.273 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:41.598 12:00:42 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:41.598 12:00:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:41.598 12:00:42 -- common/autotest_common.sh@10 -- # set +x 00:04:41.598 12:00:42 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:41.598 12:00:42 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:41.598 12:00:42 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:41.598 12:00:42 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:41.598 12:00:42 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:41.598 12:00:42 -- common/autotest_common.sh@1565 -- # 
get_nvme_bdfs 00:04:41.598 12:00:42 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:41.598 12:00:42 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:41.598 12:00:42 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.598 12:00:42 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:41.598 12:00:42 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:41.598 12:00:42 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:41.598 12:00:42 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:04:41.598 12:00:42 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:41.598 12:00:42 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:41.598 12:00:42 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:41.598 12:00:42 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:41.598 12:00:42 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:04:41.598 12:00:42 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:04:41.598 12:00:42 -- common/autotest_common.sh@1579 -- # return 0 00:04:41.598 12:00:42 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:41.598 12:00:42 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:41.598 12:00:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:41.598 12:00:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:41.598 12:00:42 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:41.598 12:00:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:41.598 12:00:42 -- common/autotest_common.sh@10 -- # set +x 00:04:41.598 12:00:42 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:41.598 12:00:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.598 12:00:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.598 12:00:42 -- common/autotest_common.sh@10 -- # set +x 00:04:41.859 ************************************ 00:04:41.859 START TEST env 00:04:41.859 ************************************ 00:04:41.859 12:00:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:41.859 * Looking for test storage... 
00:04:41.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:41.859 12:00:43 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:41.859 12:00:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.859 12:00:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.859 12:00:43 -- common/autotest_common.sh@10 -- # set +x 00:04:42.120 ************************************ 00:04:42.120 START TEST env_memory 00:04:42.120 ************************************ 00:04:42.120 12:00:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:42.120 00:04:42.120 00:04:42.120 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.120 http://cunit.sourceforge.net/ 00:04:42.120 00:04:42.120 00:04:42.120 Suite: memory 00:04:42.120 Test: alloc and free memory map ...[2024-04-26 12:00:43.235584] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:42.120 passed 00:04:42.120 Test: mem map translation ...[2024-04-26 12:00:43.260962] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:42.120 [2024-04-26 12:00:43.260981] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:42.120 [2024-04-26 12:00:43.261026] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:42.120 [2024-04-26 12:00:43.261033] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:42.120 passed 00:04:42.120 Test: mem map registration ...[2024-04-26 12:00:43.316054] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:42.120 [2024-04-26 12:00:43.316069] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:42.120 passed 00:04:42.380 Test: mem map adjacent registrations ...passed 00:04:42.380 00:04:42.380 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.380 suites 1 1 n/a 0 0 00:04:42.380 tests 4 4 4 0 0 00:04:42.380 asserts 152 152 152 0 n/a 00:04:42.380 00:04:42.380 Elapsed time = 0.192 seconds 00:04:42.380 00:04:42.380 real 0m0.205s 00:04:42.380 user 0m0.193s 00:04:42.380 sys 0m0.011s 00:04:42.380 12:00:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:42.381 12:00:43 -- common/autotest_common.sh@10 -- # set +x 00:04:42.381 ************************************ 00:04:42.381 END TEST env_memory 00:04:42.381 ************************************ 00:04:42.381 12:00:43 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:42.381 12:00:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.381 12:00:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.381 12:00:43 -- common/autotest_common.sh@10 -- # set +x 
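The env_vtophys test that starts below exercises DPDK's hugepage-backed memory and virtual-to-physical translation, so it only passes when the hugepage pools reported by the setup.sh status table earlier in the log (2048 pages of 2048 kB on node0) are actually reserved. A quick way to confirm that outside the harness, assuming the standard Linux procfs/sysfs layout, is:

    # system-wide hugepage counters (HugePages_Total, HugePages_Free, ...)
    grep -i huge /proc/meminfo
    # per-NUMA-node 2 MB pools, matching the node0/node1 rows in the status table
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages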
00:04:42.381 ************************************ 00:04:42.381 START TEST env_vtophys 00:04:42.381 ************************************ 00:04:42.381 12:00:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:42.381 EAL: lib.eal log level changed from notice to debug 00:04:42.381 EAL: Detected lcore 0 as core 0 on socket 0 00:04:42.381 EAL: Detected lcore 1 as core 1 on socket 0 00:04:42.381 EAL: Detected lcore 2 as core 2 on socket 0 00:04:42.381 EAL: Detected lcore 3 as core 3 on socket 0 00:04:42.381 EAL: Detected lcore 4 as core 4 on socket 0 00:04:42.381 EAL: Detected lcore 5 as core 5 on socket 0 00:04:42.381 EAL: Detected lcore 6 as core 6 on socket 0 00:04:42.381 EAL: Detected lcore 7 as core 7 on socket 0 00:04:42.381 EAL: Detected lcore 8 as core 8 on socket 0 00:04:42.381 EAL: Detected lcore 9 as core 9 on socket 0 00:04:42.381 EAL: Detected lcore 10 as core 10 on socket 0 00:04:42.381 EAL: Detected lcore 11 as core 11 on socket 0 00:04:42.381 EAL: Detected lcore 12 as core 12 on socket 0 00:04:42.381 EAL: Detected lcore 13 as core 13 on socket 0 00:04:42.381 EAL: Detected lcore 14 as core 14 on socket 0 00:04:42.381 EAL: Detected lcore 15 as core 15 on socket 0 00:04:42.381 EAL: Detected lcore 16 as core 16 on socket 0 00:04:42.381 EAL: Detected lcore 17 as core 17 on socket 0 00:04:42.381 EAL: Detected lcore 18 as core 18 on socket 0 00:04:42.381 EAL: Detected lcore 19 as core 19 on socket 0 00:04:42.381 EAL: Detected lcore 20 as core 20 on socket 0 00:04:42.381 EAL: Detected lcore 21 as core 21 on socket 0 00:04:42.381 EAL: Detected lcore 22 as core 22 on socket 0 00:04:42.381 EAL: Detected lcore 23 as core 23 on socket 0 00:04:42.381 EAL: Detected lcore 24 as core 24 on socket 0 00:04:42.381 EAL: Detected lcore 25 as core 25 on socket 0 00:04:42.381 EAL: Detected lcore 26 as core 26 on socket 0 00:04:42.381 EAL: Detected lcore 27 as core 27 on socket 0 00:04:42.381 EAL: Detected lcore 28 as core 28 on socket 0 00:04:42.381 EAL: Detected lcore 29 as core 29 on socket 0 00:04:42.381 EAL: Detected lcore 30 as core 30 on socket 0 00:04:42.381 EAL: Detected lcore 31 as core 31 on socket 0 00:04:42.381 EAL: Detected lcore 32 as core 32 on socket 0 00:04:42.381 EAL: Detected lcore 33 as core 33 on socket 0 00:04:42.381 EAL: Detected lcore 34 as core 34 on socket 0 00:04:42.381 EAL: Detected lcore 35 as core 35 on socket 0 00:04:42.381 EAL: Detected lcore 36 as core 0 on socket 1 00:04:42.381 EAL: Detected lcore 37 as core 1 on socket 1 00:04:42.381 EAL: Detected lcore 38 as core 2 on socket 1 00:04:42.381 EAL: Detected lcore 39 as core 3 on socket 1 00:04:42.381 EAL: Detected lcore 40 as core 4 on socket 1 00:04:42.381 EAL: Detected lcore 41 as core 5 on socket 1 00:04:42.381 EAL: Detected lcore 42 as core 6 on socket 1 00:04:42.381 EAL: Detected lcore 43 as core 7 on socket 1 00:04:42.381 EAL: Detected lcore 44 as core 8 on socket 1 00:04:42.381 EAL: Detected lcore 45 as core 9 on socket 1 00:04:42.381 EAL: Detected lcore 46 as core 10 on socket 1 00:04:42.381 EAL: Detected lcore 47 as core 11 on socket 1 00:04:42.381 EAL: Detected lcore 48 as core 12 on socket 1 00:04:42.381 EAL: Detected lcore 49 as core 13 on socket 1 00:04:42.381 EAL: Detected lcore 50 as core 14 on socket 1 00:04:42.381 EAL: Detected lcore 51 as core 15 on socket 1 00:04:42.381 EAL: Detected lcore 52 as core 16 on socket 1 00:04:42.381 EAL: Detected lcore 53 as core 17 on socket 1 00:04:42.381 EAL: Detected lcore 54 as core 18 on socket 1 
00:04:42.381 EAL: Detected lcore 55 as core 19 on socket 1 00:04:42.381 EAL: Detected lcore 56 as core 20 on socket 1 00:04:42.381 EAL: Detected lcore 57 as core 21 on socket 1 00:04:42.381 EAL: Detected lcore 58 as core 22 on socket 1 00:04:42.381 EAL: Detected lcore 59 as core 23 on socket 1 00:04:42.381 EAL: Detected lcore 60 as core 24 on socket 1 00:04:42.381 EAL: Detected lcore 61 as core 25 on socket 1 00:04:42.381 EAL: Detected lcore 62 as core 26 on socket 1 00:04:42.381 EAL: Detected lcore 63 as core 27 on socket 1 00:04:42.381 EAL: Detected lcore 64 as core 28 on socket 1 00:04:42.381 EAL: Detected lcore 65 as core 29 on socket 1 00:04:42.381 EAL: Detected lcore 66 as core 30 on socket 1 00:04:42.381 EAL: Detected lcore 67 as core 31 on socket 1 00:04:42.381 EAL: Detected lcore 68 as core 32 on socket 1 00:04:42.381 EAL: Detected lcore 69 as core 33 on socket 1 00:04:42.381 EAL: Detected lcore 70 as core 34 on socket 1 00:04:42.381 EAL: Detected lcore 71 as core 35 on socket 1 00:04:42.381 EAL: Detected lcore 72 as core 0 on socket 0 00:04:42.381 EAL: Detected lcore 73 as core 1 on socket 0 00:04:42.381 EAL: Detected lcore 74 as core 2 on socket 0 00:04:42.381 EAL: Detected lcore 75 as core 3 on socket 0 00:04:42.381 EAL: Detected lcore 76 as core 4 on socket 0 00:04:42.381 EAL: Detected lcore 77 as core 5 on socket 0 00:04:42.381 EAL: Detected lcore 78 as core 6 on socket 0 00:04:42.381 EAL: Detected lcore 79 as core 7 on socket 0 00:04:42.381 EAL: Detected lcore 80 as core 8 on socket 0 00:04:42.381 EAL: Detected lcore 81 as core 9 on socket 0 00:04:42.381 EAL: Detected lcore 82 as core 10 on socket 0 00:04:42.381 EAL: Detected lcore 83 as core 11 on socket 0 00:04:42.381 EAL: Detected lcore 84 as core 12 on socket 0 00:04:42.381 EAL: Detected lcore 85 as core 13 on socket 0 00:04:42.381 EAL: Detected lcore 86 as core 14 on socket 0 00:04:42.381 EAL: Detected lcore 87 as core 15 on socket 0 00:04:42.381 EAL: Detected lcore 88 as core 16 on socket 0 00:04:42.381 EAL: Detected lcore 89 as core 17 on socket 0 00:04:42.381 EAL: Detected lcore 90 as core 18 on socket 0 00:04:42.381 EAL: Detected lcore 91 as core 19 on socket 0 00:04:42.381 EAL: Detected lcore 92 as core 20 on socket 0 00:04:42.381 EAL: Detected lcore 93 as core 21 on socket 0 00:04:42.381 EAL: Detected lcore 94 as core 22 on socket 0 00:04:42.642 EAL: Detected lcore 95 as core 23 on socket 0 00:04:42.642 EAL: Detected lcore 96 as core 24 on socket 0 00:04:42.642 EAL: Detected lcore 97 as core 25 on socket 0 00:04:42.642 EAL: Detected lcore 98 as core 26 on socket 0 00:04:42.642 EAL: Detected lcore 99 as core 27 on socket 0 00:04:42.642 EAL: Detected lcore 100 as core 28 on socket 0 00:04:42.642 EAL: Detected lcore 101 as core 29 on socket 0 00:04:42.642 EAL: Detected lcore 102 as core 30 on socket 0 00:04:42.642 EAL: Detected lcore 103 as core 31 on socket 0 00:04:42.642 EAL: Detected lcore 104 as core 32 on socket 0 00:04:42.642 EAL: Detected lcore 105 as core 33 on socket 0 00:04:42.642 EAL: Detected lcore 106 as core 34 on socket 0 00:04:42.642 EAL: Detected lcore 107 as core 35 on socket 0 00:04:42.642 EAL: Detected lcore 108 as core 0 on socket 1 00:04:42.642 EAL: Detected lcore 109 as core 1 on socket 1 00:04:42.642 EAL: Detected lcore 110 as core 2 on socket 1 00:04:42.642 EAL: Detected lcore 111 as core 3 on socket 1 00:04:42.642 EAL: Detected lcore 112 as core 4 on socket 1 00:04:42.642 EAL: Detected lcore 113 as core 5 on socket 1 00:04:42.642 EAL: Detected lcore 114 as core 6 on socket 1 00:04:42.642 
EAL: Detected lcore 115 as core 7 on socket 1 00:04:42.642 EAL: Detected lcore 116 as core 8 on socket 1 00:04:42.642 EAL: Detected lcore 117 as core 9 on socket 1 00:04:42.642 EAL: Detected lcore 118 as core 10 on socket 1 00:04:42.642 EAL: Detected lcore 119 as core 11 on socket 1 00:04:42.642 EAL: Detected lcore 120 as core 12 on socket 1 00:04:42.642 EAL: Detected lcore 121 as core 13 on socket 1 00:04:42.642 EAL: Detected lcore 122 as core 14 on socket 1 00:04:42.642 EAL: Detected lcore 123 as core 15 on socket 1 00:04:42.642 EAL: Detected lcore 124 as core 16 on socket 1 00:04:42.642 EAL: Detected lcore 125 as core 17 on socket 1 00:04:42.642 EAL: Detected lcore 126 as core 18 on socket 1 00:04:42.642 EAL: Detected lcore 127 as core 19 on socket 1 00:04:42.642 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:42.642 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:42.642 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:42.642 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:42.642 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:42.642 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:42.642 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:42.642 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:42.642 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:42.642 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:42.642 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:42.642 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:42.642 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:42.642 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:42.642 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:42.642 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:42.642 EAL: Maximum logical cores by configuration: 128 00:04:42.642 EAL: Detected CPU lcores: 128 00:04:42.642 EAL: Detected NUMA nodes: 2 00:04:42.642 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:42.642 EAL: Detected shared linkage of DPDK 00:04:42.642 EAL: No shared files mode enabled, IPC will be disabled 00:04:42.642 EAL: Bus pci wants IOVA as 'DC' 00:04:42.642 EAL: Buses did not request a specific IOVA mode. 00:04:42.642 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:42.642 EAL: Selected IOVA mode 'VA' 00:04:42.642 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.642 EAL: Probing VFIO support... 00:04:42.642 EAL: IOMMU type 1 (Type 1) is supported 00:04:42.642 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:42.642 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:42.642 EAL: VFIO support initialized 00:04:42.642 EAL: Ask a virtual area of 0x2e000 bytes 00:04:42.642 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:42.642 EAL: Setting up physically contiguous memory... 
00:04:42.642 EAL: Setting maximum number of open files to 524288 00:04:42.642 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:42.642 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:42.642 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:42.642 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.642 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:42.642 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.642 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.642 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:42.642 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:42.642 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.642 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:42.642 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.642 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.642 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:42.642 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:42.642 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.642 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:42.642 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.642 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.642 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:42.642 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:42.642 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.642 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:42.642 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.642 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.642 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:42.642 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:42.642 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:42.642 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.642 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:42.642 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.642 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.642 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:42.642 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:42.642 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.642 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:42.642 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.642 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.642 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:42.642 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:42.642 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.642 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:42.642 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.642 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.642 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:42.642 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:42.642 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.642 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:42.642 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.643 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.643 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:42.643 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:42.643 EAL: Hugepages will be freed exactly as allocated. 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: TSC frequency is ~2400000 KHz 00:04:42.643 EAL: Main lcore 0 is ready (tid=7f0a86233a00;cpuset=[0]) 00:04:42.643 EAL: Trying to obtain current memory policy. 00:04:42.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.643 EAL: Restoring previous memory policy: 0 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was expanded by 2MB 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:42.643 EAL: Mem event callback 'spdk:(nil)' registered 00:04:42.643 00:04:42.643 00:04:42.643 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.643 http://cunit.sourceforge.net/ 00:04:42.643 00:04:42.643 00:04:42.643 Suite: components_suite 00:04:42.643 Test: vtophys_malloc_test ...passed 00:04:42.643 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:42.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.643 EAL: Restoring previous memory policy: 4 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was expanded by 4MB 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was shrunk by 4MB 00:04:42.643 EAL: Trying to obtain current memory policy. 00:04:42.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.643 EAL: Restoring previous memory policy: 4 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was expanded by 6MB 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was shrunk by 6MB 00:04:42.643 EAL: Trying to obtain current memory policy. 00:04:42.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.643 EAL: Restoring previous memory policy: 4 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was expanded by 10MB 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was shrunk by 10MB 00:04:42.643 EAL: Trying to obtain current memory policy. 
00:04:42.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.643 EAL: Restoring previous memory policy: 4 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was expanded by 18MB 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was shrunk by 18MB 00:04:42.643 EAL: Trying to obtain current memory policy. 00:04:42.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.643 EAL: Restoring previous memory policy: 4 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was expanded by 34MB 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was shrunk by 34MB 00:04:42.643 EAL: Trying to obtain current memory policy. 00:04:42.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.643 EAL: Restoring previous memory policy: 4 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was expanded by 66MB 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was shrunk by 66MB 00:04:42.643 EAL: Trying to obtain current memory policy. 00:04:42.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.643 EAL: Restoring previous memory policy: 4 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was expanded by 130MB 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was shrunk by 130MB 00:04:42.643 EAL: Trying to obtain current memory policy. 00:04:42.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.643 EAL: Restoring previous memory policy: 4 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was expanded by 258MB 00:04:42.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.643 EAL: request: mp_malloc_sync 00:04:42.643 EAL: No shared files mode enabled, IPC is disabled 00:04:42.643 EAL: Heap on socket 0 was shrunk by 258MB 00:04:42.643 EAL: Trying to obtain current memory policy. 
00:04:42.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.903 EAL: Restoring previous memory policy: 4 00:04:42.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.903 EAL: request: mp_malloc_sync 00:04:42.903 EAL: No shared files mode enabled, IPC is disabled 00:04:42.903 EAL: Heap on socket 0 was expanded by 514MB 00:04:42.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.903 EAL: request: mp_malloc_sync 00:04:42.903 EAL: No shared files mode enabled, IPC is disabled 00:04:42.903 EAL: Heap on socket 0 was shrunk by 514MB 00:04:42.903 EAL: Trying to obtain current memory policy. 00:04:42.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.163 EAL: Restoring previous memory policy: 4 00:04:43.163 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.163 EAL: request: mp_malloc_sync 00:04:43.163 EAL: No shared files mode enabled, IPC is disabled 00:04:43.163 EAL: Heap on socket 0 was expanded by 1026MB 00:04:43.163 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.163 EAL: request: mp_malloc_sync 00:04:43.163 EAL: No shared files mode enabled, IPC is disabled 00:04:43.163 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:43.163 passed 00:04:43.163 00:04:43.163 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.163 suites 1 1 n/a 0 0 00:04:43.163 tests 2 2 2 0 0 00:04:43.163 asserts 497 497 497 0 n/a 00:04:43.163 00:04:43.163 Elapsed time = 0.650 seconds 00:04:43.163 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.163 EAL: request: mp_malloc_sync 00:04:43.163 EAL: No shared files mode enabled, IPC is disabled 00:04:43.163 EAL: Heap on socket 0 was shrunk by 2MB 00:04:43.163 EAL: No shared files mode enabled, IPC is disabled 00:04:43.163 EAL: No shared files mode enabled, IPC is disabled 00:04:43.163 EAL: No shared files mode enabled, IPC is disabled 00:04:43.163 00:04:43.163 real 0m0.766s 00:04:43.163 user 0m0.399s 00:04:43.163 sys 0m0.343s 00:04:43.163 12:00:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.163 12:00:44 -- common/autotest_common.sh@10 -- # set +x 00:04:43.163 ************************************ 00:04:43.163 END TEST env_vtophys 00:04:43.163 ************************************ 00:04:43.163 12:00:44 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:43.163 12:00:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.163 12:00:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.163 12:00:44 -- common/autotest_common.sh@10 -- # set +x 00:04:43.423 ************************************ 00:04:43.423 START TEST env_pci 00:04:43.423 ************************************ 00:04:43.423 12:00:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:43.423 00:04:43.423 00:04:43.423 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.423 http://cunit.sourceforge.net/ 00:04:43.423 00:04:43.423 00:04:43.423 Suite: pci 00:04:43.423 Test: pci_hook ...[2024-04-26 12:00:44.538243] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3191054 has claimed it 00:04:43.423 EAL: Cannot find device (10000:00:01.0) 00:04:43.423 EAL: Failed to attach device on primary process 00:04:43.423 passed 00:04:43.423 00:04:43.423 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.423 suites 1 1 n/a 0 0 00:04:43.423 tests 1 1 1 0 0 
00:04:43.423 asserts 25 25 25 0 n/a 00:04:43.423 00:04:43.423 Elapsed time = 0.029 seconds 00:04:43.423 00:04:43.423 real 0m0.049s 00:04:43.423 user 0m0.018s 00:04:43.423 sys 0m0.031s 00:04:43.423 12:00:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.423 12:00:44 -- common/autotest_common.sh@10 -- # set +x 00:04:43.423 ************************************ 00:04:43.423 END TEST env_pci 00:04:43.423 ************************************ 00:04:43.423 12:00:44 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:43.423 12:00:44 -- env/env.sh@15 -- # uname 00:04:43.423 12:00:44 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:43.423 12:00:44 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:43.423 12:00:44 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.423 12:00:44 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:43.423 12:00:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.423 12:00:44 -- common/autotest_common.sh@10 -- # set +x 00:04:43.683 ************************************ 00:04:43.683 START TEST env_dpdk_post_init 00:04:43.683 ************************************ 00:04:43.683 12:00:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.683 EAL: Detected CPU lcores: 128 00:04:43.683 EAL: Detected NUMA nodes: 2 00:04:43.683 EAL: Detected shared linkage of DPDK 00:04:43.683 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.683 EAL: Selected IOVA mode 'VA' 00:04:43.683 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.683 EAL: VFIO support initialized 00:04:43.683 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.683 EAL: Using IOMMU type 1 (Type 1) 00:04:43.942 EAL: Ignore mapping IO port bar(1) 00:04:43.942 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:44.202 EAL: Ignore mapping IO port bar(1) 00:04:44.202 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:44.462 EAL: Ignore mapping IO port bar(1) 00:04:44.462 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:44.462 EAL: Ignore mapping IO port bar(1) 00:04:44.721 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:44.721 EAL: Ignore mapping IO port bar(1) 00:04:44.981 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:44.981 EAL: Ignore mapping IO port bar(1) 00:04:45.247 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:45.247 EAL: Ignore mapping IO port bar(1) 00:04:45.248 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:45.509 EAL: Ignore mapping IO port bar(1) 00:04:45.509 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:45.768 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:46.027 EAL: Ignore mapping IO port bar(1) 00:04:46.027 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:46.027 EAL: Ignore mapping IO port bar(1) 00:04:46.287 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:46.287 EAL: Ignore mapping IO port bar(1) 00:04:46.547 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
00:04:46.547 EAL: Ignore mapping IO port bar(1) 00:04:46.807 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:46.807 EAL: Ignore mapping IO port bar(1) 00:04:46.807 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:47.067 EAL: Ignore mapping IO port bar(1) 00:04:47.067 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:47.326 EAL: Ignore mapping IO port bar(1) 00:04:47.326 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:47.600 EAL: Ignore mapping IO port bar(1) 00:04:47.600 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:47.600 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:47.600 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:47.600 Starting DPDK initialization... 00:04:47.600 Starting SPDK post initialization... 00:04:47.600 SPDK NVMe probe 00:04:47.600 Attaching to 0000:65:00.0 00:04:47.600 Attached to 0000:65:00.0 00:04:47.600 Cleaning up... 00:04:49.512 00:04:49.512 real 0m5.718s 00:04:49.512 user 0m0.195s 00:04:49.512 sys 0m0.067s 00:04:49.512 12:00:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.512 12:00:50 -- common/autotest_common.sh@10 -- # set +x 00:04:49.512 ************************************ 00:04:49.512 END TEST env_dpdk_post_init 00:04:49.512 ************************************ 00:04:49.512 12:00:50 -- env/env.sh@26 -- # uname 00:04:49.512 12:00:50 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:49.512 12:00:50 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.512 12:00:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.512 12:00:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.512 12:00:50 -- common/autotest_common.sh@10 -- # set +x 00:04:49.512 ************************************ 00:04:49.512 START TEST env_mem_callbacks 00:04:49.512 ************************************ 00:04:49.512 12:00:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.512 EAL: Detected CPU lcores: 128 00:04:49.512 EAL: Detected NUMA nodes: 2 00:04:49.512 EAL: Detected shared linkage of DPDK 00:04:49.512 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.512 EAL: Selected IOVA mode 'VA' 00:04:49.512 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.512 EAL: VFIO support initialized 00:04:49.512 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.512 00:04:49.512 00:04:49.512 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.512 http://cunit.sourceforge.net/ 00:04:49.512 00:04:49.512 00:04:49.512 Suite: memory 00:04:49.512 Test: test ... 
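The register/unregister lines that follow come from the env_mem_callbacks unit test (its path is the run_test argument above): it hooks a memory-event callback, then allocates and frees buffers of assorted sizes, so each register/unregister line is the callback reporting the DPDK heap growing or handing memory back, and each 'buf ... len ... PASSED' confirms the buffer landed inside a region the callback had seen. A minimal rerun, assuming hugepages are already reserved:

$ sudo ./scripts/setup.sh status                 # confirm hugepages are still set up
$ sudo ./test/env/mem_callbacks/mem_callbacks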
00:04:49.512 register 0x200000200000 2097152 00:04:49.512 malloc 3145728 00:04:49.512 register 0x200000400000 4194304 00:04:49.512 buf 0x200000500000 len 3145728 PASSED 00:04:49.512 malloc 64 00:04:49.512 buf 0x2000004fff40 len 64 PASSED 00:04:49.512 malloc 4194304 00:04:49.512 register 0x200000800000 6291456 00:04:49.512 buf 0x200000a00000 len 4194304 PASSED 00:04:49.512 free 0x200000500000 3145728 00:04:49.512 free 0x2000004fff40 64 00:04:49.512 unregister 0x200000400000 4194304 PASSED 00:04:49.512 free 0x200000a00000 4194304 00:04:49.512 unregister 0x200000800000 6291456 PASSED 00:04:49.512 malloc 8388608 00:04:49.512 register 0x200000400000 10485760 00:04:49.512 buf 0x200000600000 len 8388608 PASSED 00:04:49.512 free 0x200000600000 8388608 00:04:49.512 unregister 0x200000400000 10485760 PASSED 00:04:49.512 passed 00:04:49.512 00:04:49.512 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.512 suites 1 1 n/a 0 0 00:04:49.512 tests 1 1 1 0 0 00:04:49.512 asserts 15 15 15 0 n/a 00:04:49.512 00:04:49.512 Elapsed time = 0.008 seconds 00:04:49.512 00:04:49.512 real 0m0.064s 00:04:49.512 user 0m0.014s 00:04:49.512 sys 0m0.050s 00:04:49.512 12:00:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.512 12:00:50 -- common/autotest_common.sh@10 -- # set +x 00:04:49.512 ************************************ 00:04:49.512 END TEST env_mem_callbacks 00:04:49.512 ************************************ 00:04:49.773 00:04:49.773 real 0m7.839s 00:04:49.773 user 0m1.197s 00:04:49.773 sys 0m1.099s 00:04:49.773 12:00:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.773 12:00:50 -- common/autotest_common.sh@10 -- # set +x 00:04:49.773 ************************************ 00:04:49.773 END TEST env 00:04:49.773 ************************************ 00:04:49.773 12:00:50 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:49.773 12:00:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.773 12:00:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.773 12:00:50 -- common/autotest_common.sh@10 -- # set +x 00:04:49.773 ************************************ 00:04:49.773 START TEST rpc 00:04:49.773 ************************************ 00:04:49.773 12:00:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:50.033 * Looking for test storage... 00:04:50.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.033 12:00:51 -- rpc/rpc.sh@65 -- # spdk_pid=3192431 00:04:50.033 12:00:51 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.033 12:00:51 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:50.033 12:00:51 -- rpc/rpc.sh@67 -- # waitforlisten 3192431 00:04:50.033 12:00:51 -- common/autotest_common.sh@817 -- # '[' -z 3192431 ']' 00:04:50.033 12:00:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.033 12:00:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:50.033 12:00:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
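For the rpc suite, rpc.sh starts the target with bdev tracepoints enabled (-e bdev) and waitforlisten then polls until the default RPC socket at /var/tmp/spdk.sock answers. Done by hand it looks roughly like the sketch below; the until loop is a crude stand-in for waitforlisten, not the helper itself:

$ ./build/bin/spdk_tgt -e bdev &
$ until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done   # wait for /var/tmp/spdk.sock
$ ./scripts/rpc.py spdk_get_version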
00:04:50.033 12:00:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:50.033 12:00:51 -- common/autotest_common.sh@10 -- # set +x 00:04:50.033 [2024-04-26 12:00:51.107790] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:04:50.033 [2024-04-26 12:00:51.107852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192431 ] 00:04:50.033 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.033 [2024-04-26 12:00:51.172309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.033 [2024-04-26 12:00:51.242374] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:50.033 [2024-04-26 12:00:51.242410] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3192431' to capture a snapshot of events at runtime. 00:04:50.033 [2024-04-26 12:00:51.242418] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:50.033 [2024-04-26 12:00:51.242424] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:50.033 [2024-04-26 12:00:51.242430] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3192431 for offline analysis/debug. 00:04:50.033 [2024-04-26 12:00:51.242449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.973 12:00:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:50.973 12:00:51 -- common/autotest_common.sh@850 -- # return 0 00:04:50.973 12:00:51 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.974 12:00:51 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.974 12:00:51 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:50.974 12:00:51 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:50.974 12:00:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.974 12:00:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.974 12:00:51 -- common/autotest_common.sh@10 -- # set +x 00:04:50.974 ************************************ 00:04:50.974 START TEST rpc_integrity 00:04:50.974 ************************************ 00:04:50.974 12:00:52 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:50.974 12:00:52 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.974 12:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.974 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:50.974 12:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.974 12:00:52 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.974 12:00:52 -- rpc/rpc.sh@13 -- # jq length 00:04:50.974 12:00:52 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.974 12:00:52 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.974 12:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 
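The JSON blocks that follow are bdev_get_bdevs output captured at each step of rpc_integrity: the list starts empty, an 8 MB malloc bdev with 512-byte blocks is created, a passthru bdev is stacked on top (after which Malloc0 reports claimed: true with claim_type exclusive_write), and then both are deleted and the list is checked to be empty again. Driven by hand against a running target, the same sequence is roughly:

$ ./scripts/rpc.py bdev_get_bdevs | jq length        # expect 0
$ ./scripts/rpc.py bdev_malloc_create 8 512          # prints the new bdev name, e.g. Malloc0
$ ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
$ ./scripts/rpc.py bdev_get_bdevs | jq length        # expect 2
$ ./scripts/rpc.py bdev_passthru_delete Passthru0
$ ./scripts/rpc.py bdev_malloc_delete Malloc0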
00:04:50.974 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:50.974 12:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.974 12:00:52 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:50.974 12:00:52 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.974 12:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.974 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:50.974 12:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.974 12:00:52 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.974 { 00:04:50.974 "name": "Malloc0", 00:04:50.974 "aliases": [ 00:04:50.974 "fd5579c4-0d5e-4be2-9c53-46c9557f2fcf" 00:04:50.974 ], 00:04:50.974 "product_name": "Malloc disk", 00:04:50.974 "block_size": 512, 00:04:50.974 "num_blocks": 16384, 00:04:50.974 "uuid": "fd5579c4-0d5e-4be2-9c53-46c9557f2fcf", 00:04:50.974 "assigned_rate_limits": { 00:04:50.974 "rw_ios_per_sec": 0, 00:04:50.974 "rw_mbytes_per_sec": 0, 00:04:50.974 "r_mbytes_per_sec": 0, 00:04:50.974 "w_mbytes_per_sec": 0 00:04:50.974 }, 00:04:50.974 "claimed": false, 00:04:50.974 "zoned": false, 00:04:50.974 "supported_io_types": { 00:04:50.974 "read": true, 00:04:50.974 "write": true, 00:04:50.974 "unmap": true, 00:04:50.974 "write_zeroes": true, 00:04:50.974 "flush": true, 00:04:50.974 "reset": true, 00:04:50.974 "compare": false, 00:04:50.974 "compare_and_write": false, 00:04:50.974 "abort": true, 00:04:50.974 "nvme_admin": false, 00:04:50.974 "nvme_io": false 00:04:50.974 }, 00:04:50.974 "memory_domains": [ 00:04:50.974 { 00:04:50.974 "dma_device_id": "system", 00:04:50.974 "dma_device_type": 1 00:04:50.974 }, 00:04:50.974 { 00:04:50.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.974 "dma_device_type": 2 00:04:50.974 } 00:04:50.974 ], 00:04:50.974 "driver_specific": {} 00:04:50.974 } 00:04:50.974 ]' 00:04:50.974 12:00:52 -- rpc/rpc.sh@17 -- # jq length 00:04:50.974 12:00:52 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.974 12:00:52 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:50.974 12:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.974 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:50.974 [2024-04-26 12:00:52.147030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:50.974 [2024-04-26 12:00:52.147061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.974 [2024-04-26 12:00:52.147073] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15a8b30 00:04:50.974 [2024-04-26 12:00:52.147081] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.974 [2024-04-26 12:00:52.148437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.974 [2024-04-26 12:00:52.148457] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.974 Passthru0 00:04:50.974 12:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.974 12:00:52 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.974 12:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.974 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:50.974 12:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.974 12:00:52 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.974 { 00:04:50.974 "name": "Malloc0", 00:04:50.974 "aliases": [ 00:04:50.974 "fd5579c4-0d5e-4be2-9c53-46c9557f2fcf" 00:04:50.974 ], 00:04:50.974 "product_name": "Malloc disk", 00:04:50.974 "block_size": 512, 
00:04:50.974 "num_blocks": 16384, 00:04:50.974 "uuid": "fd5579c4-0d5e-4be2-9c53-46c9557f2fcf", 00:04:50.974 "assigned_rate_limits": { 00:04:50.974 "rw_ios_per_sec": 0, 00:04:50.974 "rw_mbytes_per_sec": 0, 00:04:50.974 "r_mbytes_per_sec": 0, 00:04:50.974 "w_mbytes_per_sec": 0 00:04:50.974 }, 00:04:50.974 "claimed": true, 00:04:50.974 "claim_type": "exclusive_write", 00:04:50.974 "zoned": false, 00:04:50.974 "supported_io_types": { 00:04:50.974 "read": true, 00:04:50.974 "write": true, 00:04:50.974 "unmap": true, 00:04:50.974 "write_zeroes": true, 00:04:50.974 "flush": true, 00:04:50.974 "reset": true, 00:04:50.974 "compare": false, 00:04:50.974 "compare_and_write": false, 00:04:50.974 "abort": true, 00:04:50.974 "nvme_admin": false, 00:04:50.974 "nvme_io": false 00:04:50.974 }, 00:04:50.974 "memory_domains": [ 00:04:50.974 { 00:04:50.974 "dma_device_id": "system", 00:04:50.974 "dma_device_type": 1 00:04:50.974 }, 00:04:50.974 { 00:04:50.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.974 "dma_device_type": 2 00:04:50.974 } 00:04:50.974 ], 00:04:50.974 "driver_specific": {} 00:04:50.974 }, 00:04:50.974 { 00:04:50.974 "name": "Passthru0", 00:04:50.974 "aliases": [ 00:04:50.974 "16bb6255-06a6-5a7a-9204-a5fb7a5cc8cf" 00:04:50.974 ], 00:04:50.974 "product_name": "passthru", 00:04:50.974 "block_size": 512, 00:04:50.974 "num_blocks": 16384, 00:04:50.974 "uuid": "16bb6255-06a6-5a7a-9204-a5fb7a5cc8cf", 00:04:50.974 "assigned_rate_limits": { 00:04:50.974 "rw_ios_per_sec": 0, 00:04:50.974 "rw_mbytes_per_sec": 0, 00:04:50.974 "r_mbytes_per_sec": 0, 00:04:50.974 "w_mbytes_per_sec": 0 00:04:50.974 }, 00:04:50.974 "claimed": false, 00:04:50.974 "zoned": false, 00:04:50.974 "supported_io_types": { 00:04:50.974 "read": true, 00:04:50.974 "write": true, 00:04:50.974 "unmap": true, 00:04:50.974 "write_zeroes": true, 00:04:50.974 "flush": true, 00:04:50.974 "reset": true, 00:04:50.974 "compare": false, 00:04:50.974 "compare_and_write": false, 00:04:50.974 "abort": true, 00:04:50.974 "nvme_admin": false, 00:04:50.974 "nvme_io": false 00:04:50.974 }, 00:04:50.974 "memory_domains": [ 00:04:50.974 { 00:04:50.974 "dma_device_id": "system", 00:04:50.974 "dma_device_type": 1 00:04:50.974 }, 00:04:50.974 { 00:04:50.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.974 "dma_device_type": 2 00:04:50.974 } 00:04:50.974 ], 00:04:50.974 "driver_specific": { 00:04:50.974 "passthru": { 00:04:50.974 "name": "Passthru0", 00:04:50.974 "base_bdev_name": "Malloc0" 00:04:50.974 } 00:04:50.974 } 00:04:50.974 } 00:04:50.974 ]' 00:04:50.974 12:00:52 -- rpc/rpc.sh@21 -- # jq length 00:04:51.234 12:00:52 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:51.234 12:00:52 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.234 12:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.234 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.234 12:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.234 12:00:52 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:51.234 12:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.234 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.234 12:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.234 12:00:52 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.234 12:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.234 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.234 12:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.234 12:00:52 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.234 12:00:52 -- rpc/rpc.sh@26 -- # jq length 00:04:51.234 12:00:52 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:51.234 00:04:51.234 real 0m0.288s 00:04:51.234 user 0m0.189s 00:04:51.234 sys 0m0.036s 00:04:51.234 12:00:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.234 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.234 ************************************ 00:04:51.234 END TEST rpc_integrity 00:04:51.234 ************************************ 00:04:51.234 12:00:52 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:51.234 12:00:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.234 12:00:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.234 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.494 ************************************ 00:04:51.494 START TEST rpc_plugins 00:04:51.494 ************************************ 00:04:51.494 12:00:52 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:51.494 12:00:52 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:51.494 12:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.494 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.494 12:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.494 12:00:52 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:51.494 12:00:52 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:51.494 12:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.494 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.494 12:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.494 12:00:52 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:51.494 { 00:04:51.494 "name": "Malloc1", 00:04:51.494 "aliases": [ 00:04:51.494 "665182cb-27ea-4f60-8a56-60385ac49964" 00:04:51.494 ], 00:04:51.494 "product_name": "Malloc disk", 00:04:51.494 "block_size": 4096, 00:04:51.494 "num_blocks": 256, 00:04:51.494 "uuid": "665182cb-27ea-4f60-8a56-60385ac49964", 00:04:51.494 "assigned_rate_limits": { 00:04:51.494 "rw_ios_per_sec": 0, 00:04:51.494 "rw_mbytes_per_sec": 0, 00:04:51.494 "r_mbytes_per_sec": 0, 00:04:51.494 "w_mbytes_per_sec": 0 00:04:51.495 }, 00:04:51.495 "claimed": false, 00:04:51.495 "zoned": false, 00:04:51.495 "supported_io_types": { 00:04:51.495 "read": true, 00:04:51.495 "write": true, 00:04:51.495 "unmap": true, 00:04:51.495 "write_zeroes": true, 00:04:51.495 "flush": true, 00:04:51.495 "reset": true, 00:04:51.495 "compare": false, 00:04:51.495 "compare_and_write": false, 00:04:51.495 "abort": true, 00:04:51.495 "nvme_admin": false, 00:04:51.495 "nvme_io": false 00:04:51.495 }, 00:04:51.495 "memory_domains": [ 00:04:51.495 { 00:04:51.495 "dma_device_id": "system", 00:04:51.495 "dma_device_type": 1 00:04:51.495 }, 00:04:51.495 { 00:04:51.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.495 "dma_device_type": 2 00:04:51.495 } 00:04:51.495 ], 00:04:51.495 "driver_specific": {} 00:04:51.495 } 00:04:51.495 ]' 00:04:51.495 12:00:52 -- rpc/rpc.sh@32 -- # jq length 00:04:51.495 12:00:52 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:51.495 12:00:52 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:51.495 12:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.495 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.495 12:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.495 12:00:52 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:51.495 12:00:52 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:04:51.495 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.495 12:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.495 12:00:52 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:51.495 12:00:52 -- rpc/rpc.sh@36 -- # jq length 00:04:51.495 12:00:52 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:51.495 00:04:51.495 real 0m0.153s 00:04:51.495 user 0m0.095s 00:04:51.495 sys 0m0.020s 00:04:51.495 12:00:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.495 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.495 ************************************ 00:04:51.495 END TEST rpc_plugins 00:04:51.495 ************************************ 00:04:51.495 12:00:52 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:51.495 12:00:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.495 12:00:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.495 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.756 ************************************ 00:04:51.756 START TEST rpc_trace_cmd_test 00:04:51.756 ************************************ 00:04:51.756 12:00:52 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:51.756 12:00:52 -- rpc/rpc.sh@40 -- # local info 00:04:51.756 12:00:52 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:51.756 12:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.756 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.756 12:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.756 12:00:52 -- rpc/rpc.sh@42 -- # info='{ 00:04:51.756 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3192431", 00:04:51.756 "tpoint_group_mask": "0x8", 00:04:51.756 "iscsi_conn": { 00:04:51.756 "mask": "0x2", 00:04:51.756 "tpoint_mask": "0x0" 00:04:51.756 }, 00:04:51.756 "scsi": { 00:04:51.756 "mask": "0x4", 00:04:51.756 "tpoint_mask": "0x0" 00:04:51.756 }, 00:04:51.756 "bdev": { 00:04:51.756 "mask": "0x8", 00:04:51.756 "tpoint_mask": "0xffffffffffffffff" 00:04:51.756 }, 00:04:51.756 "nvmf_rdma": { 00:04:51.756 "mask": "0x10", 00:04:51.756 "tpoint_mask": "0x0" 00:04:51.756 }, 00:04:51.756 "nvmf_tcp": { 00:04:51.756 "mask": "0x20", 00:04:51.756 "tpoint_mask": "0x0" 00:04:51.756 }, 00:04:51.756 "ftl": { 00:04:51.756 "mask": "0x40", 00:04:51.756 "tpoint_mask": "0x0" 00:04:51.756 }, 00:04:51.756 "blobfs": { 00:04:51.756 "mask": "0x80", 00:04:51.756 "tpoint_mask": "0x0" 00:04:51.756 }, 00:04:51.756 "dsa": { 00:04:51.756 "mask": "0x200", 00:04:51.756 "tpoint_mask": "0x0" 00:04:51.756 }, 00:04:51.756 "thread": { 00:04:51.756 "mask": "0x400", 00:04:51.756 "tpoint_mask": "0x0" 00:04:51.756 }, 00:04:51.756 "nvme_pcie": { 00:04:51.756 "mask": "0x800", 00:04:51.756 "tpoint_mask": "0x0" 00:04:51.756 }, 00:04:51.756 "iaa": { 00:04:51.756 "mask": "0x1000", 00:04:51.756 "tpoint_mask": "0x0" 00:04:51.756 }, 00:04:51.756 "nvme_tcp": { 00:04:51.756 "mask": "0x2000", 00:04:51.756 "tpoint_mask": "0x0" 00:04:51.756 }, 00:04:51.756 "bdev_nvme": { 00:04:51.756 "mask": "0x4000", 00:04:51.756 "tpoint_mask": "0x0" 00:04:51.756 }, 00:04:51.756 "sock": { 00:04:51.756 "mask": "0x8000", 00:04:51.756 "tpoint_mask": "0x0" 00:04:51.756 } 00:04:51.756 }' 00:04:51.756 12:00:52 -- rpc/rpc.sh@43 -- # jq length 00:04:51.756 12:00:52 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:51.756 12:00:52 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:51.756 12:00:52 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:51.756 12:00:52 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
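rpc_trace_cmd_test reads trace_get_info and checks that the output carries a tpoint_group_mask, a tpoint_shm_path embedding the target's pid, and a bdev group whose tpoint_mask is non-zero — consistent with the -e bdev flag the target was started with (group mask 0x8, tpoint_mask 0xffffffffffffffff). To poke at the same data interactively while the target is up, something like the following should work; <pid> is a placeholder, not a value taken from this log:

$ ./scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path
$ spdk_trace -s spdk_tgt -p <pid>      # decode the /dev/shm trace snapshot, as the startup notice above suggests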
00:04:52.016 12:00:52 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:52.016 12:00:52 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:52.016 12:00:53 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:52.016 12:00:53 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:52.016 12:00:53 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:52.016 00:04:52.016 real 0m0.246s 00:04:52.016 user 0m0.206s 00:04:52.016 sys 0m0.033s 00:04:52.016 12:00:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.016 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.016 ************************************ 00:04:52.016 END TEST rpc_trace_cmd_test 00:04:52.016 ************************************ 00:04:52.016 12:00:53 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:52.016 12:00:53 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:52.016 12:00:53 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:52.016 12:00:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.016 12:00:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.016 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.279 ************************************ 00:04:52.279 START TEST rpc_daemon_integrity 00:04:52.279 ************************************ 00:04:52.279 12:00:53 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:52.279 12:00:53 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:52.279 12:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.279 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.279 12:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.279 12:00:53 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:52.279 12:00:53 -- rpc/rpc.sh@13 -- # jq length 00:04:52.279 12:00:53 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:52.279 12:00:53 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:52.279 12:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.279 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.279 12:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.279 12:00:53 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:52.279 12:00:53 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:52.279 12:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.279 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.279 12:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.279 12:00:53 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:52.279 { 00:04:52.279 "name": "Malloc2", 00:04:52.279 "aliases": [ 00:04:52.279 "1c9ff5db-75bc-4512-96d3-05d2a7300094" 00:04:52.279 ], 00:04:52.279 "product_name": "Malloc disk", 00:04:52.279 "block_size": 512, 00:04:52.279 "num_blocks": 16384, 00:04:52.279 "uuid": "1c9ff5db-75bc-4512-96d3-05d2a7300094", 00:04:52.279 "assigned_rate_limits": { 00:04:52.279 "rw_ios_per_sec": 0, 00:04:52.279 "rw_mbytes_per_sec": 0, 00:04:52.279 "r_mbytes_per_sec": 0, 00:04:52.279 "w_mbytes_per_sec": 0 00:04:52.279 }, 00:04:52.279 "claimed": false, 00:04:52.279 "zoned": false, 00:04:52.279 "supported_io_types": { 00:04:52.279 "read": true, 00:04:52.279 "write": true, 00:04:52.279 "unmap": true, 00:04:52.279 "write_zeroes": true, 00:04:52.279 "flush": true, 00:04:52.279 "reset": true, 00:04:52.279 "compare": false, 00:04:52.279 "compare_and_write": false, 00:04:52.279 "abort": true, 00:04:52.279 "nvme_admin": false, 00:04:52.279 "nvme_io": false 00:04:52.279 }, 00:04:52.279 "memory_domains": [ 00:04:52.279 { 00:04:52.279 "dma_device_id": "system", 00:04:52.279 
"dma_device_type": 1 00:04:52.279 }, 00:04:52.279 { 00:04:52.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.279 "dma_device_type": 2 00:04:52.279 } 00:04:52.279 ], 00:04:52.279 "driver_specific": {} 00:04:52.279 } 00:04:52.279 ]' 00:04:52.279 12:00:53 -- rpc/rpc.sh@17 -- # jq length 00:04:52.279 12:00:53 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:52.279 12:00:53 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:52.279 12:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.279 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.279 [2024-04-26 12:00:53.398432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:52.279 [2024-04-26 12:00:53.398461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:52.279 [2024-04-26 12:00:53.398474] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x174c720 00:04:52.279 [2024-04-26 12:00:53.398481] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:52.279 [2024-04-26 12:00:53.399716] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:52.279 [2024-04-26 12:00:53.399741] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:52.279 Passthru0 00:04:52.279 12:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.279 12:00:53 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:52.279 12:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.279 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.279 12:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.279 12:00:53 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:52.279 { 00:04:52.279 "name": "Malloc2", 00:04:52.279 "aliases": [ 00:04:52.279 "1c9ff5db-75bc-4512-96d3-05d2a7300094" 00:04:52.279 ], 00:04:52.279 "product_name": "Malloc disk", 00:04:52.279 "block_size": 512, 00:04:52.279 "num_blocks": 16384, 00:04:52.279 "uuid": "1c9ff5db-75bc-4512-96d3-05d2a7300094", 00:04:52.279 "assigned_rate_limits": { 00:04:52.279 "rw_ios_per_sec": 0, 00:04:52.279 "rw_mbytes_per_sec": 0, 00:04:52.279 "r_mbytes_per_sec": 0, 00:04:52.279 "w_mbytes_per_sec": 0 00:04:52.279 }, 00:04:52.279 "claimed": true, 00:04:52.279 "claim_type": "exclusive_write", 00:04:52.279 "zoned": false, 00:04:52.279 "supported_io_types": { 00:04:52.279 "read": true, 00:04:52.279 "write": true, 00:04:52.279 "unmap": true, 00:04:52.279 "write_zeroes": true, 00:04:52.279 "flush": true, 00:04:52.279 "reset": true, 00:04:52.279 "compare": false, 00:04:52.279 "compare_and_write": false, 00:04:52.279 "abort": true, 00:04:52.279 "nvme_admin": false, 00:04:52.279 "nvme_io": false 00:04:52.279 }, 00:04:52.279 "memory_domains": [ 00:04:52.279 { 00:04:52.279 "dma_device_id": "system", 00:04:52.279 "dma_device_type": 1 00:04:52.279 }, 00:04:52.279 { 00:04:52.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.279 "dma_device_type": 2 00:04:52.279 } 00:04:52.279 ], 00:04:52.279 "driver_specific": {} 00:04:52.279 }, 00:04:52.279 { 00:04:52.279 "name": "Passthru0", 00:04:52.279 "aliases": [ 00:04:52.279 "846518e8-8f39-5d37-b0a7-0f433abe0b92" 00:04:52.279 ], 00:04:52.279 "product_name": "passthru", 00:04:52.279 "block_size": 512, 00:04:52.279 "num_blocks": 16384, 00:04:52.279 "uuid": "846518e8-8f39-5d37-b0a7-0f433abe0b92", 00:04:52.279 "assigned_rate_limits": { 00:04:52.279 "rw_ios_per_sec": 0, 00:04:52.279 "rw_mbytes_per_sec": 0, 00:04:52.279 "r_mbytes_per_sec": 0, 00:04:52.279 
"w_mbytes_per_sec": 0 00:04:52.279 }, 00:04:52.279 "claimed": false, 00:04:52.279 "zoned": false, 00:04:52.279 "supported_io_types": { 00:04:52.279 "read": true, 00:04:52.279 "write": true, 00:04:52.279 "unmap": true, 00:04:52.279 "write_zeroes": true, 00:04:52.279 "flush": true, 00:04:52.279 "reset": true, 00:04:52.279 "compare": false, 00:04:52.279 "compare_and_write": false, 00:04:52.279 "abort": true, 00:04:52.279 "nvme_admin": false, 00:04:52.279 "nvme_io": false 00:04:52.279 }, 00:04:52.279 "memory_domains": [ 00:04:52.279 { 00:04:52.279 "dma_device_id": "system", 00:04:52.279 "dma_device_type": 1 00:04:52.280 }, 00:04:52.280 { 00:04:52.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.280 "dma_device_type": 2 00:04:52.280 } 00:04:52.280 ], 00:04:52.280 "driver_specific": { 00:04:52.280 "passthru": { 00:04:52.280 "name": "Passthru0", 00:04:52.280 "base_bdev_name": "Malloc2" 00:04:52.280 } 00:04:52.280 } 00:04:52.280 } 00:04:52.280 ]' 00:04:52.280 12:00:53 -- rpc/rpc.sh@21 -- # jq length 00:04:52.280 12:00:53 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:52.280 12:00:53 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:52.280 12:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.280 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.280 12:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.280 12:00:53 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:52.280 12:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.280 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.280 12:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.280 12:00:53 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:52.280 12:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.280 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.541 12:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.541 12:00:53 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:52.541 12:00:53 -- rpc/rpc.sh@26 -- # jq length 00:04:52.541 12:00:53 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:52.541 00:04:52.541 real 0m0.297s 00:04:52.541 user 0m0.191s 00:04:52.541 sys 0m0.040s 00:04:52.541 12:00:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.541 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.541 ************************************ 00:04:52.541 END TEST rpc_daemon_integrity 00:04:52.541 ************************************ 00:04:52.541 12:00:53 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:52.541 12:00:53 -- rpc/rpc.sh@84 -- # killprocess 3192431 00:04:52.541 12:00:53 -- common/autotest_common.sh@936 -- # '[' -z 3192431 ']' 00:04:52.541 12:00:53 -- common/autotest_common.sh@940 -- # kill -0 3192431 00:04:52.541 12:00:53 -- common/autotest_common.sh@941 -- # uname 00:04:52.541 12:00:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:52.541 12:00:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3192431 00:04:52.541 12:00:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:52.541 12:00:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:52.541 12:00:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3192431' 00:04:52.541 killing process with pid 3192431 00:04:52.541 12:00:53 -- common/autotest_common.sh@955 -- # kill 3192431 00:04:52.541 12:00:53 -- common/autotest_common.sh@960 -- # wait 3192431 00:04:52.801 00:04:52.801 real 0m2.896s 00:04:52.801 user 0m3.843s 
00:04:52.801 sys 0m0.886s 00:04:52.801 12:00:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.801 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.801 ************************************ 00:04:52.801 END TEST rpc 00:04:52.801 ************************************ 00:04:52.801 12:00:53 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.801 12:00:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.801 12:00:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.801 12:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:53.061 ************************************ 00:04:53.061 START TEST skip_rpc 00:04:53.061 ************************************ 00:04:53.061 12:00:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:53.061 * Looking for test storage... 00:04:53.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:53.061 12:00:54 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.061 12:00:54 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:53.061 12:00:54 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:53.061 12:00:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.061 12:00:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.061 12:00:54 -- common/autotest_common.sh@10 -- # set +x 00:04:53.322 ************************************ 00:04:53.322 START TEST skip_rpc 00:04:53.322 ************************************ 00:04:53.322 12:00:54 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:53.322 12:00:54 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3193314 00:04:53.322 12:00:54 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.322 12:00:54 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:53.322 12:00:54 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:53.322 [2024-04-26 12:00:54.339815] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
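This first skip_rpc case starts the target with --no-rpc-server, so no /var/tmp/spdk.sock listener is ever created; the NOT rpc_cmd spdk_get_version checks below assert that an RPC call against the missing socket fails before the target is killed. A by-hand equivalent of the negative check:

$ ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
$ ./scripts/rpc.py spdk_get_version && echo 'unexpected: RPC answered' || echo 'RPC refused, as expected'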
00:04:53.322 [2024-04-26 12:00:54.339864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193314 ] 00:04:53.322 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.322 [2024-04-26 12:00:54.403531] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.322 [2024-04-26 12:00:54.466792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.606 12:00:59 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:58.606 12:00:59 -- common/autotest_common.sh@638 -- # local es=0 00:04:58.606 12:00:59 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:58.606 12:00:59 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:58.606 12:00:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:58.606 12:00:59 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:58.606 12:00:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:58.606 12:00:59 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:58.606 12:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.606 12:00:59 -- common/autotest_common.sh@10 -- # set +x 00:04:58.606 12:00:59 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:58.606 12:00:59 -- common/autotest_common.sh@641 -- # es=1 00:04:58.606 12:00:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:58.606 12:00:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:58.606 12:00:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:58.606 12:00:59 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:58.606 12:00:59 -- rpc/skip_rpc.sh@23 -- # killprocess 3193314 00:04:58.606 12:00:59 -- common/autotest_common.sh@936 -- # '[' -z 3193314 ']' 00:04:58.606 12:00:59 -- common/autotest_common.sh@940 -- # kill -0 3193314 00:04:58.606 12:00:59 -- common/autotest_common.sh@941 -- # uname 00:04:58.606 12:00:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:58.606 12:00:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3193314 00:04:58.606 12:00:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:58.606 12:00:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:58.606 12:00:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3193314' 00:04:58.606 killing process with pid 3193314 00:04:58.606 12:00:59 -- common/autotest_common.sh@955 -- # kill 3193314 00:04:58.606 12:00:59 -- common/autotest_common.sh@960 -- # wait 3193314 00:04:58.606 00:04:58.606 real 0m5.279s 00:04:58.606 user 0m5.093s 00:04:58.606 sys 0m0.223s 00:04:58.606 12:00:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.606 12:00:59 -- common/autotest_common.sh@10 -- # set +x 00:04:58.606 ************************************ 00:04:58.606 END TEST skip_rpc 00:04:58.606 ************************************ 00:04:58.606 12:00:59 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:58.606 12:00:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.606 12:00:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.606 12:00:59 -- common/autotest_common.sh@10 -- # set +x 00:04:58.606 ************************************ 00:04:58.606 START TEST skip_rpc_with_json 00:04:58.606 ************************************ 
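skip_rpc_with_json, starting here, shows that a configuration captured over RPC can bring the target back up with no RPC calls at all: nvmf_get_transports first fails with 'No such device' because no transport exists yet, a TCP transport is created, save_config dumps the full subsystem configuration (reproduced below) into config.json, and a second target is then launched with --json config.json while its log is grepped for 'TCP Transport Init' to prove the transport was recreated from the file. Reduced to its core, the round trip is:

$ ./scripts/rpc.py nvmf_create_transport -t tcp
$ ./scripts/rpc.py save_config > config.json
$ ./build/bin/spdk_tgt --json config.json      # comes up with the TCP transport already configured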
00:04:58.606 12:00:59 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:58.606 12:00:59 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:58.606 12:00:59 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3194536 00:04:58.606 12:00:59 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.606 12:00:59 -- rpc/skip_rpc.sh@31 -- # waitforlisten 3194536 00:04:58.606 12:00:59 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.606 12:00:59 -- common/autotest_common.sh@817 -- # '[' -z 3194536 ']' 00:04:58.606 12:00:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.606 12:00:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:58.606 12:00:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.606 12:00:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:58.606 12:00:59 -- common/autotest_common.sh@10 -- # set +x 00:04:58.606 [2024-04-26 12:00:59.817267] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:04:58.606 [2024-04-26 12:00:59.817325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194536 ] 00:04:58.866 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.866 [2024-04-26 12:00:59.884150] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.866 [2024-04-26 12:00:59.957978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.445 12:01:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:59.445 12:01:00 -- common/autotest_common.sh@850 -- # return 0 00:04:59.445 12:01:00 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:59.445 12:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.445 12:01:00 -- common/autotest_common.sh@10 -- # set +x 00:04:59.445 [2024-04-26 12:01:00.602112] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:59.445 request: 00:04:59.445 { 00:04:59.445 "trtype": "tcp", 00:04:59.445 "method": "nvmf_get_transports", 00:04:59.445 "req_id": 1 00:04:59.445 } 00:04:59.445 Got JSON-RPC error response 00:04:59.445 response: 00:04:59.445 { 00:04:59.445 "code": -19, 00:04:59.445 "message": "No such device" 00:04:59.445 } 00:04:59.445 12:01:00 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:59.445 12:01:00 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:59.445 12:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.445 12:01:00 -- common/autotest_common.sh@10 -- # set +x 00:04:59.445 [2024-04-26 12:01:00.614219] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.445 12:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.445 12:01:00 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:59.445 12:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.445 12:01:00 -- common/autotest_common.sh@10 -- # set +x 00:04:59.705 12:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.705 12:01:00 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:59.705 { 
00:04:59.705 "subsystems": [ 00:04:59.705 { 00:04:59.705 "subsystem": "vfio_user_target", 00:04:59.705 "config": null 00:04:59.705 }, 00:04:59.705 { 00:04:59.705 "subsystem": "keyring", 00:04:59.705 "config": [] 00:04:59.705 }, 00:04:59.705 { 00:04:59.705 "subsystem": "iobuf", 00:04:59.705 "config": [ 00:04:59.705 { 00:04:59.705 "method": "iobuf_set_options", 00:04:59.705 "params": { 00:04:59.705 "small_pool_count": 8192, 00:04:59.706 "large_pool_count": 1024, 00:04:59.706 "small_bufsize": 8192, 00:04:59.706 "large_bufsize": 135168 00:04:59.706 } 00:04:59.706 } 00:04:59.706 ] 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "subsystem": "sock", 00:04:59.706 "config": [ 00:04:59.706 { 00:04:59.706 "method": "sock_impl_set_options", 00:04:59.706 "params": { 00:04:59.706 "impl_name": "posix", 00:04:59.706 "recv_buf_size": 2097152, 00:04:59.706 "send_buf_size": 2097152, 00:04:59.706 "enable_recv_pipe": true, 00:04:59.706 "enable_quickack": false, 00:04:59.706 "enable_placement_id": 0, 00:04:59.706 "enable_zerocopy_send_server": true, 00:04:59.706 "enable_zerocopy_send_client": false, 00:04:59.706 "zerocopy_threshold": 0, 00:04:59.706 "tls_version": 0, 00:04:59.706 "enable_ktls": false 00:04:59.706 } 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "method": "sock_impl_set_options", 00:04:59.706 "params": { 00:04:59.706 "impl_name": "ssl", 00:04:59.706 "recv_buf_size": 4096, 00:04:59.706 "send_buf_size": 4096, 00:04:59.706 "enable_recv_pipe": true, 00:04:59.706 "enable_quickack": false, 00:04:59.706 "enable_placement_id": 0, 00:04:59.706 "enable_zerocopy_send_server": true, 00:04:59.706 "enable_zerocopy_send_client": false, 00:04:59.706 "zerocopy_threshold": 0, 00:04:59.706 "tls_version": 0, 00:04:59.706 "enable_ktls": false 00:04:59.706 } 00:04:59.706 } 00:04:59.706 ] 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "subsystem": "vmd", 00:04:59.706 "config": [] 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "subsystem": "accel", 00:04:59.706 "config": [ 00:04:59.706 { 00:04:59.706 "method": "accel_set_options", 00:04:59.706 "params": { 00:04:59.706 "small_cache_size": 128, 00:04:59.706 "large_cache_size": 16, 00:04:59.706 "task_count": 2048, 00:04:59.706 "sequence_count": 2048, 00:04:59.706 "buf_count": 2048 00:04:59.706 } 00:04:59.706 } 00:04:59.706 ] 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "subsystem": "bdev", 00:04:59.706 "config": [ 00:04:59.706 { 00:04:59.706 "method": "bdev_set_options", 00:04:59.706 "params": { 00:04:59.706 "bdev_io_pool_size": 65535, 00:04:59.706 "bdev_io_cache_size": 256, 00:04:59.706 "bdev_auto_examine": true, 00:04:59.706 "iobuf_small_cache_size": 128, 00:04:59.706 "iobuf_large_cache_size": 16 00:04:59.706 } 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "method": "bdev_raid_set_options", 00:04:59.706 "params": { 00:04:59.706 "process_window_size_kb": 1024 00:04:59.706 } 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "method": "bdev_iscsi_set_options", 00:04:59.706 "params": { 00:04:59.706 "timeout_sec": 30 00:04:59.706 } 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "method": "bdev_nvme_set_options", 00:04:59.706 "params": { 00:04:59.706 "action_on_timeout": "none", 00:04:59.706 "timeout_us": 0, 00:04:59.706 "timeout_admin_us": 0, 00:04:59.706 "keep_alive_timeout_ms": 10000, 00:04:59.706 "arbitration_burst": 0, 00:04:59.706 "low_priority_weight": 0, 00:04:59.706 "medium_priority_weight": 0, 00:04:59.706 "high_priority_weight": 0, 00:04:59.706 "nvme_adminq_poll_period_us": 10000, 00:04:59.706 "nvme_ioq_poll_period_us": 0, 00:04:59.706 "io_queue_requests": 0, 00:04:59.706 
"delay_cmd_submit": true, 00:04:59.706 "transport_retry_count": 4, 00:04:59.706 "bdev_retry_count": 3, 00:04:59.706 "transport_ack_timeout": 0, 00:04:59.706 "ctrlr_loss_timeout_sec": 0, 00:04:59.706 "reconnect_delay_sec": 0, 00:04:59.706 "fast_io_fail_timeout_sec": 0, 00:04:59.706 "disable_auto_failback": false, 00:04:59.706 "generate_uuids": false, 00:04:59.706 "transport_tos": 0, 00:04:59.706 "nvme_error_stat": false, 00:04:59.706 "rdma_srq_size": 0, 00:04:59.706 "io_path_stat": false, 00:04:59.706 "allow_accel_sequence": false, 00:04:59.706 "rdma_max_cq_size": 0, 00:04:59.706 "rdma_cm_event_timeout_ms": 0, 00:04:59.706 "dhchap_digests": [ 00:04:59.706 "sha256", 00:04:59.706 "sha384", 00:04:59.706 "sha512" 00:04:59.706 ], 00:04:59.706 "dhchap_dhgroups": [ 00:04:59.706 "null", 00:04:59.706 "ffdhe2048", 00:04:59.706 "ffdhe3072", 00:04:59.706 "ffdhe4096", 00:04:59.706 "ffdhe6144", 00:04:59.706 "ffdhe8192" 00:04:59.706 ] 00:04:59.706 } 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "method": "bdev_nvme_set_hotplug", 00:04:59.706 "params": { 00:04:59.706 "period_us": 100000, 00:04:59.706 "enable": false 00:04:59.706 } 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "method": "bdev_wait_for_examine" 00:04:59.706 } 00:04:59.706 ] 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "subsystem": "scsi", 00:04:59.706 "config": null 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "subsystem": "scheduler", 00:04:59.706 "config": [ 00:04:59.706 { 00:04:59.706 "method": "framework_set_scheduler", 00:04:59.706 "params": { 00:04:59.706 "name": "static" 00:04:59.706 } 00:04:59.706 } 00:04:59.706 ] 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "subsystem": "vhost_scsi", 00:04:59.706 "config": [] 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "subsystem": "vhost_blk", 00:04:59.706 "config": [] 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "subsystem": "ublk", 00:04:59.706 "config": [] 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "subsystem": "nbd", 00:04:59.706 "config": [] 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "subsystem": "nvmf", 00:04:59.706 "config": [ 00:04:59.706 { 00:04:59.706 "method": "nvmf_set_config", 00:04:59.706 "params": { 00:04:59.706 "discovery_filter": "match_any", 00:04:59.706 "admin_cmd_passthru": { 00:04:59.706 "identify_ctrlr": false 00:04:59.706 } 00:04:59.706 } 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "method": "nvmf_set_max_subsystems", 00:04:59.706 "params": { 00:04:59.706 "max_subsystems": 1024 00:04:59.706 } 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "method": "nvmf_set_crdt", 00:04:59.706 "params": { 00:04:59.706 "crdt1": 0, 00:04:59.706 "crdt2": 0, 00:04:59.706 "crdt3": 0 00:04:59.706 } 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "method": "nvmf_create_transport", 00:04:59.706 "params": { 00:04:59.706 "trtype": "TCP", 00:04:59.706 "max_queue_depth": 128, 00:04:59.706 "max_io_qpairs_per_ctrlr": 127, 00:04:59.706 "in_capsule_data_size": 4096, 00:04:59.706 "max_io_size": 131072, 00:04:59.706 "io_unit_size": 131072, 00:04:59.706 "max_aq_depth": 128, 00:04:59.706 "num_shared_buffers": 511, 00:04:59.706 "buf_cache_size": 4294967295, 00:04:59.706 "dif_insert_or_strip": false, 00:04:59.706 "zcopy": false, 00:04:59.706 "c2h_success": true, 00:04:59.706 "sock_priority": 0, 00:04:59.706 "abort_timeout_sec": 1, 00:04:59.706 "ack_timeout": 0, 00:04:59.706 "data_wr_pool_size": 0 00:04:59.706 } 00:04:59.706 } 00:04:59.706 ] 00:04:59.706 }, 00:04:59.706 { 00:04:59.706 "subsystem": "iscsi", 00:04:59.706 "config": [ 00:04:59.706 { 00:04:59.706 "method": "iscsi_set_options", 00:04:59.706 "params": { 00:04:59.706 
"node_base": "iqn.2016-06.io.spdk", 00:04:59.706 "max_sessions": 128, 00:04:59.706 "max_connections_per_session": 2, 00:04:59.706 "max_queue_depth": 64, 00:04:59.706 "default_time2wait": 2, 00:04:59.706 "default_time2retain": 20, 00:04:59.706 "first_burst_length": 8192, 00:04:59.706 "immediate_data": true, 00:04:59.706 "allow_duplicated_isid": false, 00:04:59.706 "error_recovery_level": 0, 00:04:59.706 "nop_timeout": 60, 00:04:59.706 "nop_in_interval": 30, 00:04:59.706 "disable_chap": false, 00:04:59.706 "require_chap": false, 00:04:59.706 "mutual_chap": false, 00:04:59.706 "chap_group": 0, 00:04:59.706 "max_large_datain_per_connection": 64, 00:04:59.706 "max_r2t_per_connection": 4, 00:04:59.706 "pdu_pool_size": 36864, 00:04:59.706 "immediate_data_pool_size": 16384, 00:04:59.706 "data_out_pool_size": 2048 00:04:59.706 } 00:04:59.706 } 00:04:59.706 ] 00:04:59.706 } 00:04:59.706 ] 00:04:59.706 } 00:04:59.706 12:01:00 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:59.706 12:01:00 -- rpc/skip_rpc.sh@40 -- # killprocess 3194536 00:04:59.706 12:01:00 -- common/autotest_common.sh@936 -- # '[' -z 3194536 ']' 00:04:59.706 12:01:00 -- common/autotest_common.sh@940 -- # kill -0 3194536 00:04:59.706 12:01:00 -- common/autotest_common.sh@941 -- # uname 00:04:59.706 12:01:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:59.706 12:01:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3194536 00:04:59.706 12:01:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:59.706 12:01:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:59.706 12:01:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3194536' 00:04:59.706 killing process with pid 3194536 00:04:59.706 12:01:00 -- common/autotest_common.sh@955 -- # kill 3194536 00:04:59.706 12:01:00 -- common/autotest_common.sh@960 -- # wait 3194536 00:04:59.966 12:01:01 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3194704 00:04:59.966 12:01:01 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:59.966 12:01:01 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:05.248 12:01:06 -- rpc/skip_rpc.sh@50 -- # killprocess 3194704 00:05:05.248 12:01:06 -- common/autotest_common.sh@936 -- # '[' -z 3194704 ']' 00:05:05.248 12:01:06 -- common/autotest_common.sh@940 -- # kill -0 3194704 00:05:05.248 12:01:06 -- common/autotest_common.sh@941 -- # uname 00:05:05.248 12:01:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:05.248 12:01:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3194704 00:05:05.248 12:01:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:05.248 12:01:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:05.248 12:01:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3194704' 00:05:05.248 killing process with pid 3194704 00:05:05.248 12:01:06 -- common/autotest_common.sh@955 -- # kill 3194704 00:05:05.248 12:01:06 -- common/autotest_common.sh@960 -- # wait 3194704 00:05:05.248 12:01:06 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:05.248 12:01:06 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:05.248 00:05:05.248 real 0m6.569s 00:05:05.248 user 0m6.452s 00:05:05.248 sys 0m0.536s 00:05:05.248 
12:01:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.248 12:01:06 -- common/autotest_common.sh@10 -- # set +x 00:05:05.248 ************************************ 00:05:05.248 END TEST skip_rpc_with_json 00:05:05.248 ************************************ 00:05:05.248 12:01:06 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:05.248 12:01:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.248 12:01:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.248 12:01:06 -- common/autotest_common.sh@10 -- # set +x 00:05:05.510 ************************************ 00:05:05.510 START TEST skip_rpc_with_delay 00:05:05.510 ************************************ 00:05:05.510 12:01:06 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:05.510 12:01:06 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.510 12:01:06 -- common/autotest_common.sh@638 -- # local es=0 00:05:05.510 12:01:06 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.510 12:01:06 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.510 12:01:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:05.510 12:01:06 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.510 12:01:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:05.510 12:01:06 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.510 12:01:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:05.510 12:01:06 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.510 12:01:06 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:05.510 12:01:06 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.510 [2024-04-26 12:01:06.567482] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
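skip_rpc_with_delay asserts the inverse combination: --wait-for-rpc together with --no-rpc-server must be rejected, since pausing initialization only makes sense when an RPC server will later resume it, and the target duly logs the "Cannot use '--wait-for-rpc'" error above. For contrast, a sketch of the normal --wait-for-rpc flow; framework_start_init is the standard SPDK RPC for resuming initialization and does not appear in this excerpt:

```bash
#!/usr/bin/env bash
# Normal --wait-for-rpc usage, for contrast with the rejected combination above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the target but pause before subsystem initialization.
$SPDK/build/bin/spdk_tgt -m 0x1 --wait-for-rpc &

# (the real tests poll the RPC socket with waitforlisten before issuing RPCs)
sleep 1

# ... issue early-startup RPCs here ...

# Resume initialization; only now are the configured subsystems brought up.
$SPDK/scripts/rpc.py framework_start_init
```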
00:05:05.510 [2024-04-26 12:01:06.567575] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:05.510 12:01:06 -- common/autotest_common.sh@641 -- # es=1 00:05:05.510 12:01:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:05.510 12:01:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:05.510 12:01:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:05.510 00:05:05.510 real 0m0.073s 00:05:05.510 user 0m0.039s 00:05:05.510 sys 0m0.033s 00:05:05.510 12:01:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.510 12:01:06 -- common/autotest_common.sh@10 -- # set +x 00:05:05.510 ************************************ 00:05:05.510 END TEST skip_rpc_with_delay 00:05:05.510 ************************************ 00:05:05.510 12:01:06 -- rpc/skip_rpc.sh@77 -- # uname 00:05:05.510 12:01:06 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:05.510 12:01:06 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:05.510 12:01:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.510 12:01:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.510 12:01:06 -- common/autotest_common.sh@10 -- # set +x 00:05:05.770 ************************************ 00:05:05.770 START TEST exit_on_failed_rpc_init 00:05:05.770 ************************************ 00:05:05.770 12:01:06 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:05.770 12:01:06 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3196092 00:05:05.770 12:01:06 -- rpc/skip_rpc.sh@63 -- # waitforlisten 3196092 00:05:05.770 12:01:06 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.770 12:01:06 -- common/autotest_common.sh@817 -- # '[' -z 3196092 ']' 00:05:05.770 12:01:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.770 12:01:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:05.770 12:01:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.770 12:01:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:05.770 12:01:06 -- common/autotest_common.sh@10 -- # set +x 00:05:05.770 [2024-04-26 12:01:06.831985] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
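The NOT wrapper and the es bookkeeping traced above come from autotest_common.sh: the wrapped command is expected to fail, and the test step passes only if it exits non-zero. A simplified sketch of that helper (the real one also normalizes exit codes above 128, i.e. deaths by signal, before deciding):

```bash
# Simplified version of the NOT helper from autotest_common.sh used above:
# succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1        # command unexpectedly succeeded -> test failure
    fi
    return 0            # command failed as expected
}

# Usage, mirroring skip_rpc_with_delay: this invocation must fail, so NOT returns 0.
NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
```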
00:05:05.770 [2024-04-26 12:01:06.832031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196092 ] 00:05:05.770 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.770 [2024-04-26 12:01:06.891780] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.770 [2024-04-26 12:01:06.953908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.712 12:01:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:06.712 12:01:07 -- common/autotest_common.sh@850 -- # return 0 00:05:06.712 12:01:07 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.712 12:01:07 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.712 12:01:07 -- common/autotest_common.sh@638 -- # local es=0 00:05:06.712 12:01:07 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.712 12:01:07 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.712 12:01:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:06.712 12:01:07 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.712 12:01:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:06.712 12:01:07 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.712 12:01:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:06.712 12:01:07 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.712 12:01:07 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:06.712 12:01:07 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.712 [2024-04-26 12:01:07.649137] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:06.712 [2024-04-26 12:01:07.649184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196113 ] 00:05:06.712 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.712 [2024-04-26 12:01:07.723878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.712 [2024-04-26 12:01:07.786331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.712 [2024-04-26 12:01:07.786399] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
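exit_on_failed_rpc_init deliberately starts a second spdk_tgt (core mask 0x2) while the first still owns /var/tmp/spdk.sock; the second instance logs "RPC Unix domain socket path /var/tmp/spdk.sock in use" and stops itself, which is exactly the failure the test wants to observe. To actually run two targets side by side, each needs its own RPC socket via -r and a non-overlapping core mask; the socket names below are illustrative:

```bash
#!/usr/bin/env bash
# Running two SPDK targets side by side: give each its own RPC socket with -r
# (without this, the second instance fails exactly as in the trace above).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$SPDK/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &   # illustrative socket name
$SPDK/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &   # illustrative socket name
sleep 1

# Address each instance explicitly with -s.
$SPDK/scripts/rpc.py -s /var/tmp/spdk_a.sock rpc_get_methods > /dev/null
$SPDK/scripts/rpc.py -s /var/tmp/spdk_b.sock rpc_get_methods > /dev/null
```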
00:05:06.712 [2024-04-26 12:01:07.786408] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:06.712 [2024-04-26 12:01:07.786415] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:06.712 12:01:07 -- common/autotest_common.sh@641 -- # es=234 00:05:06.712 12:01:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:06.712 12:01:07 -- common/autotest_common.sh@650 -- # es=106 00:05:06.712 12:01:07 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:06.712 12:01:07 -- common/autotest_common.sh@658 -- # es=1 00:05:06.712 12:01:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:06.712 12:01:07 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:06.712 12:01:07 -- rpc/skip_rpc.sh@70 -- # killprocess 3196092 00:05:06.712 12:01:07 -- common/autotest_common.sh@936 -- # '[' -z 3196092 ']' 00:05:06.712 12:01:07 -- common/autotest_common.sh@940 -- # kill -0 3196092 00:05:06.712 12:01:07 -- common/autotest_common.sh@941 -- # uname 00:05:06.712 12:01:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:06.712 12:01:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3196092 00:05:06.712 12:01:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:06.712 12:01:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:06.712 12:01:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3196092' 00:05:06.712 killing process with pid 3196092 00:05:06.712 12:01:07 -- common/autotest_common.sh@955 -- # kill 3196092 00:05:06.712 12:01:07 -- common/autotest_common.sh@960 -- # wait 3196092 00:05:06.973 00:05:06.973 real 0m1.330s 00:05:06.973 user 0m1.544s 00:05:06.973 sys 0m0.377s 00:05:06.973 12:01:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:06.973 12:01:08 -- common/autotest_common.sh@10 -- # set +x 00:05:06.973 ************************************ 00:05:06.973 END TEST exit_on_failed_rpc_init 00:05:06.973 ************************************ 00:05:06.973 12:01:08 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:06.973 00:05:06.973 real 0m14.121s 00:05:06.973 user 0m13.456s 00:05:06.973 sys 0m1.653s 00:05:06.973 12:01:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:06.973 12:01:08 -- common/autotest_common.sh@10 -- # set +x 00:05:06.973 ************************************ 00:05:06.973 END TEST skip_rpc 00:05:06.973 ************************************ 00:05:06.973 12:01:08 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:06.973 12:01:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.973 12:01:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.973 12:01:08 -- common/autotest_common.sh@10 -- # set +x 00:05:07.234 ************************************ 00:05:07.234 START TEST rpc_client 00:05:07.234 ************************************ 00:05:07.234 12:01:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:07.234 * Looking for test storage... 
00:05:07.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:07.494 12:01:08 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:07.494 OK 00:05:07.494 12:01:08 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:07.494 00:05:07.494 real 0m0.131s 00:05:07.494 user 0m0.058s 00:05:07.494 sys 0m0.081s 00:05:07.494 12:01:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.494 12:01:08 -- common/autotest_common.sh@10 -- # set +x 00:05:07.494 ************************************ 00:05:07.494 END TEST rpc_client 00:05:07.494 ************************************ 00:05:07.494 12:01:08 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:07.494 12:01:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.494 12:01:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.494 12:01:08 -- common/autotest_common.sh@10 -- # set +x 00:05:07.494 ************************************ 00:05:07.494 START TEST json_config 00:05:07.494 ************************************ 00:05:07.494 12:01:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:07.755 12:01:08 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:07.755 12:01:08 -- nvmf/common.sh@7 -- # uname -s 00:05:07.755 12:01:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.755 12:01:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.755 12:01:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.755 12:01:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.755 12:01:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.755 12:01:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.755 12:01:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.755 12:01:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.755 12:01:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.755 12:01:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.755 12:01:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:07.755 12:01:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:07.755 12:01:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.755 12:01:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.755 12:01:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.755 12:01:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.755 12:01:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:07.755 12:01:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.755 12:01:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.755 12:01:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.755 12:01:08 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.755 12:01:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.755 12:01:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.755 12:01:08 -- paths/export.sh@5 -- # export PATH 00:05:07.755 12:01:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.755 12:01:08 -- nvmf/common.sh@47 -- # : 0 00:05:07.755 12:01:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:07.755 12:01:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:07.755 12:01:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.755 12:01:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.755 12:01:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.755 12:01:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:07.755 12:01:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:07.755 12:01:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:07.755 12:01:08 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:07.755 12:01:08 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:07.755 12:01:08 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:07.755 12:01:08 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:07.755 12:01:08 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:07.755 12:01:08 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:07.755 12:01:08 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:07.755 12:01:08 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:07.755 12:01:08 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:07.755 12:01:08 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:07.755 12:01:08 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:05:07.755 12:01:08 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:07.755 12:01:08 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:07.755 12:01:08 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:07.755 12:01:08 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:07.755 12:01:08 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:07.755 INFO: JSON configuration test init 00:05:07.755 12:01:08 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:07.755 12:01:08 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:07.755 12:01:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:07.755 12:01:08 -- common/autotest_common.sh@10 -- # set +x 00:05:07.755 12:01:08 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:07.755 12:01:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:07.755 12:01:08 -- common/autotest_common.sh@10 -- # set +x 00:05:07.755 12:01:08 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:07.755 12:01:08 -- json_config/common.sh@9 -- # local app=target 00:05:07.755 12:01:08 -- json_config/common.sh@10 -- # shift 00:05:07.755 12:01:08 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.755 12:01:08 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.755 12:01:08 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.755 12:01:08 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.755 12:01:08 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.755 12:01:08 -- json_config/common.sh@22 -- # app_pid["$app"]=3196567 00:05:07.755 12:01:08 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.755 Waiting for target to run... 00:05:07.755 12:01:08 -- json_config/common.sh@25 -- # waitforlisten 3196567 /var/tmp/spdk_tgt.sock 00:05:07.756 12:01:08 -- common/autotest_common.sh@817 -- # '[' -z 3196567 ']' 00:05:07.756 12:01:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.756 12:01:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:07.756 12:01:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.756 12:01:08 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:07.756 12:01:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:07.756 12:01:08 -- common/autotest_common.sh@10 -- # set +x 00:05:07.756 [2024-04-26 12:01:08.857485] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
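waitforlisten, invoked above for pid 3196567, blocks until the freshly started target actually answers RPCs on /var/tmp/spdk_tgt.sock before the test proceeds. A minimal stand-in for that helper, assuming the same socket path, is just polling a cheap RPC:

```bash
# Minimal stand-in for waitforlisten: poll a cheap RPC until the target answers.
# (The real helper in autotest_common.sh also verifies that the pid is still alive.)
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/spdk_tgt.sock

until $SPDK/scripts/rpc.py -s "$SOCK" rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done
```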
00:05:07.756 [2024-04-26 12:01:08.857550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196567 ] 00:05:07.756 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.017 [2024-04-26 12:01:09.157871] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.017 [2024-04-26 12:01:09.206597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.586 12:01:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:08.586 12:01:09 -- common/autotest_common.sh@850 -- # return 0 00:05:08.586 12:01:09 -- json_config/common.sh@26 -- # echo '' 00:05:08.586 00:05:08.586 12:01:09 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:08.586 12:01:09 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:08.586 12:01:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:08.586 12:01:09 -- common/autotest_common.sh@10 -- # set +x 00:05:08.586 12:01:09 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:08.586 12:01:09 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:08.586 12:01:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:08.586 12:01:09 -- common/autotest_common.sh@10 -- # set +x 00:05:08.586 12:01:09 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:08.586 12:01:09 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:08.586 12:01:09 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:09.216 12:01:10 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:09.216 12:01:10 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:09.216 12:01:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:09.216 12:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.216 12:01:10 -- json_config/json_config.sh@45 -- # local ret=0 00:05:09.216 12:01:10 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:09.216 12:01:10 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:09.216 12:01:10 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:09.216 12:01:10 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:09.216 12:01:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:09.216 12:01:10 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:09.216 12:01:10 -- json_config/json_config.sh@48 -- # local get_types 00:05:09.216 12:01:10 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:09.216 12:01:10 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:09.216 12:01:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:09.216 12:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.216 12:01:10 -- json_config/json_config.sh@55 -- # return 0 00:05:09.216 12:01:10 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:09.216 12:01:10 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:09.216 12:01:10 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:09.216 12:01:10 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:09.216 12:01:10 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:09.216 12:01:10 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:09.216 12:01:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:09.216 12:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.216 12:01:10 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:09.216 12:01:10 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:09.216 12:01:10 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:09.216 12:01:10 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.216 12:01:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.495 MallocForNvmf0 00:05:09.495 12:01:10 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.495 12:01:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.495 MallocForNvmf1 00:05:09.495 12:01:10 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.495 12:01:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.755 [2024-04-26 12:01:10.829539] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.755 12:01:10 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.755 12:01:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.015 12:01:10 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.015 12:01:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.015 12:01:11 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.015 12:01:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.276 12:01:11 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.276 12:01:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.276 [2024-04-26 12:01:11.423489] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:10.276 12:01:11 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:10.276 12:01:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:10.276 
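The create_nvmf_subsystem_config step traced above builds the NVMe-oF target piece by piece over the RPC socket: two malloc bdevs, a TCP transport, a subsystem, two namespaces, and a listener on 127.0.0.1:4420. Collected into one place, the calls issued above are:

```bash
#!/usr/bin/env bash
# The RPC sequence issued by create_nvmf_subsystem_config above, collected in one script.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

# Backing bdevs for the namespaces.
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

# TCP transport with the same options the test passes, then the subsystem itself.
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

# Attach both bdevs as namespaces and expose the subsystem on 127.0.0.1 port 4420.
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
```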
12:01:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.276 12:01:11 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:10.276 12:01:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:10.276 12:01:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.536 12:01:11 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:10.536 12:01:11 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.536 12:01:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.536 MallocBdevForConfigChangeCheck 00:05:10.536 12:01:11 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:10.536 12:01:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:10.536 12:01:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.536 12:01:11 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:10.537 12:01:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.796 12:01:12 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:10.796 INFO: shutting down applications... 00:05:10.796 12:01:12 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:10.796 12:01:12 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:10.796 12:01:12 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:10.796 12:01:12 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:11.367 Calling clear_iscsi_subsystem 00:05:11.367 Calling clear_nvmf_subsystem 00:05:11.367 Calling clear_nbd_subsystem 00:05:11.367 Calling clear_ublk_subsystem 00:05:11.367 Calling clear_vhost_blk_subsystem 00:05:11.367 Calling clear_vhost_scsi_subsystem 00:05:11.367 Calling clear_bdev_subsystem 00:05:11.367 12:01:12 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:11.367 12:01:12 -- json_config/json_config.sh@343 -- # count=100 00:05:11.367 12:01:12 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:11.367 12:01:12 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.367 12:01:12 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:11.367 12:01:12 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:11.627 12:01:12 -- json_config/json_config.sh@345 -- # break 00:05:11.628 12:01:12 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:11.628 12:01:12 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:11.628 12:01:12 -- json_config/common.sh@31 -- # local app=target 00:05:11.628 12:01:12 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:11.628 12:01:12 -- json_config/common.sh@35 -- # [[ -n 3196567 ]] 00:05:11.628 12:01:12 -- json_config/common.sh@38 -- # kill -SIGINT 3196567 00:05:11.628 12:01:12 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:11.628 12:01:12 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.628 12:01:12 -- json_config/common.sh@41 -- # kill -0 3196567 00:05:11.628 12:01:12 -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.198 12:01:13 -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.198 12:01:13 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.198 12:01:13 -- json_config/common.sh@41 -- # kill -0 3196567 00:05:12.198 12:01:13 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:12.198 12:01:13 -- json_config/common.sh@43 -- # break 00:05:12.198 12:01:13 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:12.198 12:01:13 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:12.198 SPDK target shutdown done 00:05:12.198 12:01:13 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:12.198 INFO: relaunching applications... 00:05:12.198 12:01:13 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.198 12:01:13 -- json_config/common.sh@9 -- # local app=target 00:05:12.198 12:01:13 -- json_config/common.sh@10 -- # shift 00:05:12.198 12:01:13 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:12.198 12:01:13 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:12.198 12:01:13 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:12.198 12:01:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.198 12:01:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.198 12:01:13 -- json_config/common.sh@22 -- # app_pid["$app"]=3197618 00:05:12.198 12:01:13 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:12.198 Waiting for target to run... 00:05:12.198 12:01:13 -- json_config/common.sh@25 -- # waitforlisten 3197618 /var/tmp/spdk_tgt.sock 00:05:12.198 12:01:13 -- common/autotest_common.sh@817 -- # '[' -z 3197618 ']' 00:05:12.198 12:01:13 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.198 12:01:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.198 12:01:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:12.198 12:01:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:12.198 12:01:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:12.198 12:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:12.199 [2024-04-26 12:01:13.302633] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:12.199 [2024-04-26 12:01:13.302697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3197618 ] 00:05:12.199 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.458 [2024-04-26 12:01:13.582135] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.458 [2024-04-26 12:01:13.630290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.028 [2024-04-26 12:01:14.120124] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.028 [2024-04-26 12:01:14.152473] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:13.028 12:01:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:13.028 12:01:14 -- common/autotest_common.sh@850 -- # return 0 00:05:13.028 12:01:14 -- json_config/common.sh@26 -- # echo '' 00:05:13.028 00:05:13.028 12:01:14 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:13.028 12:01:14 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:13.028 INFO: Checking if target configuration is the same... 00:05:13.028 12:01:14 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.028 12:01:14 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:13.028 12:01:14 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.028 + '[' 2 -ne 2 ']' 00:05:13.028 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:13.028 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:13.028 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.028 +++ basename /dev/fd/62 00:05:13.028 ++ mktemp /tmp/62.XXX 00:05:13.028 + tmp_file_1=/tmp/62.2DA 00:05:13.028 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.028 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.028 + tmp_file_2=/tmp/spdk_tgt_config.json.mYV 00:05:13.028 + ret=0 00:05:13.028 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.287 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.547 + diff -u /tmp/62.2DA /tmp/spdk_tgt_config.json.mYV 00:05:13.547 + echo 'INFO: JSON config files are the same' 00:05:13.547 INFO: JSON config files are the same 00:05:13.547 + rm /tmp/62.2DA /tmp/spdk_tgt_config.json.mYV 00:05:13.547 + exit 0 00:05:13.547 12:01:14 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:13.547 12:01:14 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:13.547 INFO: changing configuration and checking if this can be detected... 
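The "Checking if target configuration is the same" step works by dumping the relaunched target's configuration with save_config, normalizing both that dump and the original spdk_tgt_config.json through config_filter.py -method sort, and diffing the two results: identical files mean the JSON round-trip preserved the configuration. A condensed sketch of that comparison, assuming config_filter.py reads its document on stdin as json_diff.sh appears to use it here (the /tmp file names are illustrative; the real script uses mktemp):

```bash
#!/usr/bin/env bash
# Condensed form of the json_diff.sh comparison traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
FILTER="$SPDK/test/json_config/config_filter.py"

# Normalize the live configuration and the saved file, then diff.
$RPC save_config | $FILTER -method sort > /tmp/live.json
$FILTER -method sort < $SPDK/spdk_tgt_config.json > /tmp/saved.json

if diff -u /tmp/live.json /tmp/saved.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
```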
00:05:13.547 12:01:14 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:13.547 12:01:14 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:13.547 12:01:14 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:13.547 12:01:14 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.547 12:01:14 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.547 + '[' 2 -ne 2 ']' 00:05:13.547 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:13.547 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:13.547 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.547 +++ basename /dev/fd/62 00:05:13.547 ++ mktemp /tmp/62.XXX 00:05:13.547 + tmp_file_1=/tmp/62.kZ4 00:05:13.547 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.547 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.547 + tmp_file_2=/tmp/spdk_tgt_config.json.NQF 00:05:13.547 + ret=0 00:05:13.547 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.807 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.068 + diff -u /tmp/62.kZ4 /tmp/spdk_tgt_config.json.NQF 00:05:14.068 + ret=1 00:05:14.068 + echo '=== Start of file: /tmp/62.kZ4 ===' 00:05:14.068 + cat /tmp/62.kZ4 00:05:14.068 + echo '=== End of file: /tmp/62.kZ4 ===' 00:05:14.068 + echo '' 00:05:14.068 + echo '=== Start of file: /tmp/spdk_tgt_config.json.NQF ===' 00:05:14.068 + cat /tmp/spdk_tgt_config.json.NQF 00:05:14.068 + echo '=== End of file: /tmp/spdk_tgt_config.json.NQF ===' 00:05:14.068 + echo '' 00:05:14.068 + rm /tmp/62.kZ4 /tmp/spdk_tgt_config.json.NQF 00:05:14.068 + exit 1 00:05:14.068 12:01:15 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:14.068 INFO: configuration change detected. 
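The change-detection half relies on MallocBdevForConfigChangeCheck, the throwaway bdev created earlier for exactly this purpose: deleting it guarantees the live configuration now differs from spdk_tgt_config.json, so the sorted diff must return non-zero (ret=1 above). The same probe in isolation:

```bash
# Change-detection probe: remove the sentinel bdev, then re-run the sorted diff
# sketched above; the saved spdk_tgt_config.json still contains the bdev, so the
# comparison is now expected to fail.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
```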
00:05:14.068 12:01:15 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:14.068 12:01:15 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:14.068 12:01:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:14.068 12:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.068 12:01:15 -- json_config/json_config.sh@307 -- # local ret=0 00:05:14.068 12:01:15 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:14.068 12:01:15 -- json_config/json_config.sh@317 -- # [[ -n 3197618 ]] 00:05:14.068 12:01:15 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:14.068 12:01:15 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:14.068 12:01:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:14.068 12:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.068 12:01:15 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:14.068 12:01:15 -- json_config/json_config.sh@193 -- # uname -s 00:05:14.068 12:01:15 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:14.068 12:01:15 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:14.068 12:01:15 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:14.068 12:01:15 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:14.068 12:01:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:14.068 12:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.068 12:01:15 -- json_config/json_config.sh@323 -- # killprocess 3197618 00:05:14.068 12:01:15 -- common/autotest_common.sh@936 -- # '[' -z 3197618 ']' 00:05:14.068 12:01:15 -- common/autotest_common.sh@940 -- # kill -0 3197618 00:05:14.068 12:01:15 -- common/autotest_common.sh@941 -- # uname 00:05:14.068 12:01:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:14.068 12:01:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3197618 00:05:14.068 12:01:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:14.068 12:01:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:14.068 12:01:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3197618' 00:05:14.068 killing process with pid 3197618 00:05:14.068 12:01:15 -- common/autotest_common.sh@955 -- # kill 3197618 00:05:14.068 12:01:15 -- common/autotest_common.sh@960 -- # wait 3197618 00:05:14.329 12:01:15 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.329 12:01:15 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:14.329 12:01:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:14.329 12:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.329 12:01:15 -- json_config/json_config.sh@328 -- # return 0 00:05:14.329 12:01:15 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:14.329 INFO: Success 00:05:14.329 00:05:14.329 real 0m6.836s 00:05:14.329 user 0m8.227s 00:05:14.329 sys 0m1.725s 00:05:14.329 12:01:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.329 12:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.329 ************************************ 00:05:14.329 END TEST json_config 00:05:14.329 ************************************ 00:05:14.329 12:01:15 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:14.329 12:01:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.329 12:01:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.329 12:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.591 ************************************ 00:05:14.591 START TEST json_config_extra_key 00:05:14.591 ************************************ 00:05:14.591 12:01:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:14.591 12:01:15 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:14.591 12:01:15 -- nvmf/common.sh@7 -- # uname -s 00:05:14.591 12:01:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.591 12:01:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.591 12:01:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.591 12:01:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.591 12:01:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.591 12:01:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.591 12:01:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.591 12:01:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.591 12:01:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.591 12:01:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.591 12:01:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:14.591 12:01:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:14.591 12:01:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.591 12:01:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.591 12:01:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.591 12:01:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.591 12:01:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:14.591 12:01:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.591 12:01:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.591 12:01:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.591 12:01:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.591 12:01:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.591 12:01:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.591 12:01:15 -- paths/export.sh@5 -- # export PATH 00:05:14.591 12:01:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.591 12:01:15 -- nvmf/common.sh@47 -- # : 0 00:05:14.591 12:01:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:14.591 12:01:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:14.591 12:01:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.591 12:01:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.591 12:01:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.591 12:01:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:14.591 12:01:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:14.591 12:01:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:14.591 12:01:15 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:14.591 12:01:15 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:14.591 12:01:15 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:14.591 12:01:15 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:14.591 12:01:15 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:14.591 12:01:15 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:14.591 12:01:15 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:14.591 12:01:15 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:14.591 12:01:15 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:14.591 12:01:15 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.591 12:01:15 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:14.591 INFO: launching applications... 
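json_config_extra_key launches the target directly from a pre-written config file, test/json_config/extra_key.json, rather than from a saved dump; the file's contents are not shown in this log. For reference only, a minimal hand-written config in the same subsystems/config format printed earlier (illustrative content, not the real extra_key.json) could be fed to --json like this:

```bash
# Illustrative only: a minimal hand-written config in the same format as the
# dump printed earlier in this log (the real extra_key.json is not shown here).
cat > /tmp/minimal_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "scheduler",
      "config": [
        { "method": "framework_set_scheduler", "params": { "name": "static" } }
      ]
    }
  ]
}
EOF

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 \
    --json /tmp/minimal_config.json
```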
00:05:14.591 12:01:15 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:14.591 12:01:15 -- json_config/common.sh@9 -- # local app=target 00:05:14.591 12:01:15 -- json_config/common.sh@10 -- # shift 00:05:14.591 12:01:15 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.591 12:01:15 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.591 12:01:15 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.591 12:01:15 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.591 12:01:15 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.591 12:01:15 -- json_config/common.sh@22 -- # app_pid["$app"]=3198163 00:05:14.591 12:01:15 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.591 Waiting for target to run... 00:05:14.591 12:01:15 -- json_config/common.sh@25 -- # waitforlisten 3198163 /var/tmp/spdk_tgt.sock 00:05:14.591 12:01:15 -- common/autotest_common.sh@817 -- # '[' -z 3198163 ']' 00:05:14.591 12:01:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.591 12:01:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:14.591 12:01:15 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:14.591 12:01:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.591 12:01:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:14.591 12:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.853 [2024-04-26 12:01:15.847815] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:14.853 [2024-04-26 12:01:15.847870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3198163 ] 00:05:14.853 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.113 [2024-04-26 12:01:16.166969] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.113 [2024-04-26 12:01:16.219426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.685 12:01:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:15.685 12:01:16 -- common/autotest_common.sh@850 -- # return 0 00:05:15.685 12:01:16 -- json_config/common.sh@26 -- # echo '' 00:05:15.685 00:05:15.685 12:01:16 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:15.685 INFO: shutting down applications... 
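The shutdown sequence traced here (and earlier for pids 3196567 and 3197618) is a graceful stop: send SIGINT, then poll the pid with kill -0 in 0.5 s steps for up to 30 iterations before declaring "SPDK target shutdown done". As a standalone sketch:

```bash
# Graceful-shutdown loop used by json_config_test_shutdown_app above:
# SIGINT first, then poll until the target has actually exited.
pid=3198163          # target pid from this run; substitute your own

kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$pid" 2> /dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done
```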
00:05:15.685 12:01:16 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:15.685 12:01:16 -- json_config/common.sh@31 -- # local app=target 00:05:15.685 12:01:16 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:15.685 12:01:16 -- json_config/common.sh@35 -- # [[ -n 3198163 ]] 00:05:15.685 12:01:16 -- json_config/common.sh@38 -- # kill -SIGINT 3198163 00:05:15.685 12:01:16 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:15.685 12:01:16 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.685 12:01:16 -- json_config/common.sh@41 -- # kill -0 3198163 00:05:15.685 12:01:16 -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.946 12:01:17 -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.946 12:01:17 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.946 12:01:17 -- json_config/common.sh@41 -- # kill -0 3198163 00:05:15.946 12:01:17 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:15.946 12:01:17 -- json_config/common.sh@43 -- # break 00:05:15.946 12:01:17 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:15.946 12:01:17 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:15.946 SPDK target shutdown done 00:05:15.946 12:01:17 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:15.946 Success 00:05:15.946 00:05:15.946 real 0m1.440s 00:05:15.946 user 0m1.037s 00:05:15.946 sys 0m0.417s 00:05:15.946 12:01:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.946 12:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:15.946 ************************************ 00:05:15.946 END TEST json_config_extra_key 00:05:15.946 ************************************ 00:05:15.946 12:01:17 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.946 12:01:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.946 12:01:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.946 12:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:16.206 ************************************ 00:05:16.206 START TEST alias_rpc 00:05:16.206 ************************************ 00:05:16.206 12:01:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.206 * Looking for test storage... 00:05:16.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:16.206 12:01:17 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:16.206 12:01:17 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3198551 00:05:16.206 12:01:17 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3198551 00:05:16.206 12:01:17 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.206 12:01:17 -- common/autotest_common.sh@817 -- # '[' -z 3198551 ']' 00:05:16.206 12:01:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.206 12:01:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:16.206 12:01:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:16.206 12:01:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:16.206 12:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:16.466 [2024-04-26 12:01:17.471610] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:16.466 [2024-04-26 12:01:17.471674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3198551 ] 00:05:16.466 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.466 [2024-04-26 12:01:17.535229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.466 [2024-04-26 12:01:17.605716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.037 12:01:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:17.037 12:01:18 -- common/autotest_common.sh@850 -- # return 0 00:05:17.037 12:01:18 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:17.298 12:01:18 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3198551 00:05:17.298 12:01:18 -- common/autotest_common.sh@936 -- # '[' -z 3198551 ']' 00:05:17.298 12:01:18 -- common/autotest_common.sh@940 -- # kill -0 3198551 00:05:17.298 12:01:18 -- common/autotest_common.sh@941 -- # uname 00:05:17.298 12:01:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:17.298 12:01:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3198551 00:05:17.298 12:01:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:17.298 12:01:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:17.298 12:01:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3198551' 00:05:17.298 killing process with pid 3198551 00:05:17.298 12:01:18 -- common/autotest_common.sh@955 -- # kill 3198551 00:05:17.298 12:01:18 -- common/autotest_common.sh@960 -- # wait 3198551 00:05:17.559 00:05:17.559 real 0m1.360s 00:05:17.559 user 0m1.495s 00:05:17.559 sys 0m0.364s 00:05:17.559 12:01:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.559 12:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.559 ************************************ 00:05:17.559 END TEST alias_rpc 00:05:17.559 ************************************ 00:05:17.559 12:01:18 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:17.559 12:01:18 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:17.559 12:01:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.559 12:01:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.559 12:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.820 ************************************ 00:05:17.820 START TEST spdkcli_tcp 00:05:17.820 ************************************ 00:05:17.820 12:01:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:17.820 * Looking for test storage... 
00:05:17.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:17.820 12:01:18 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:17.820 12:01:18 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:17.820 12:01:18 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:17.820 12:01:18 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:17.820 12:01:18 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:17.820 12:01:18 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:17.820 12:01:18 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:17.820 12:01:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:17.820 12:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.820 12:01:18 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3198953 00:05:17.820 12:01:18 -- spdkcli/tcp.sh@27 -- # waitforlisten 3198953 00:05:17.820 12:01:18 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:17.820 12:01:18 -- common/autotest_common.sh@817 -- # '[' -z 3198953 ']' 00:05:17.820 12:01:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.820 12:01:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:17.820 12:01:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.820 12:01:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:17.820 12:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.820 [2024-04-26 12:01:19.019760] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:17.820 [2024-04-26 12:01:19.019815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3198953 ] 00:05:18.082 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.082 [2024-04-26 12:01:19.081135] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.082 [2024-04-26 12:01:19.148748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.082 [2024-04-26 12:01:19.148752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.655 12:01:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:18.655 12:01:19 -- common/autotest_common.sh@850 -- # return 0 00:05:18.655 12:01:19 -- spdkcli/tcp.sh@31 -- # socat_pid=3199124 00:05:18.655 12:01:19 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:18.655 12:01:19 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:18.918 [ 00:05:18.918 "bdev_malloc_delete", 00:05:18.918 "bdev_malloc_create", 00:05:18.918 "bdev_null_resize", 00:05:18.918 "bdev_null_delete", 00:05:18.918 "bdev_null_create", 00:05:18.918 "bdev_nvme_cuse_unregister", 00:05:18.918 "bdev_nvme_cuse_register", 00:05:18.918 "bdev_opal_new_user", 00:05:18.918 "bdev_opal_set_lock_state", 00:05:18.918 "bdev_opal_delete", 00:05:18.918 "bdev_opal_get_info", 00:05:18.918 "bdev_opal_create", 00:05:18.918 "bdev_nvme_opal_revert", 00:05:18.918 "bdev_nvme_opal_init", 00:05:18.918 "bdev_nvme_send_cmd", 00:05:18.918 "bdev_nvme_get_path_iostat", 00:05:18.918 "bdev_nvme_get_mdns_discovery_info", 00:05:18.918 "bdev_nvme_stop_mdns_discovery", 00:05:18.918 "bdev_nvme_start_mdns_discovery", 00:05:18.918 "bdev_nvme_set_multipath_policy", 00:05:18.918 "bdev_nvme_set_preferred_path", 00:05:18.918 "bdev_nvme_get_io_paths", 00:05:18.918 "bdev_nvme_remove_error_injection", 00:05:18.918 "bdev_nvme_add_error_injection", 00:05:18.918 "bdev_nvme_get_discovery_info", 00:05:18.918 "bdev_nvme_stop_discovery", 00:05:18.918 "bdev_nvme_start_discovery", 00:05:18.918 "bdev_nvme_get_controller_health_info", 00:05:18.918 "bdev_nvme_disable_controller", 00:05:18.918 "bdev_nvme_enable_controller", 00:05:18.918 "bdev_nvme_reset_controller", 00:05:18.918 "bdev_nvme_get_transport_statistics", 00:05:18.918 "bdev_nvme_apply_firmware", 00:05:18.918 "bdev_nvme_detach_controller", 00:05:18.918 "bdev_nvme_get_controllers", 00:05:18.918 "bdev_nvme_attach_controller", 00:05:18.918 "bdev_nvme_set_hotplug", 00:05:18.918 "bdev_nvme_set_options", 00:05:18.918 "bdev_passthru_delete", 00:05:18.918 "bdev_passthru_create", 00:05:18.918 "bdev_lvol_grow_lvstore", 00:05:18.918 "bdev_lvol_get_lvols", 00:05:18.918 "bdev_lvol_get_lvstores", 00:05:18.918 "bdev_lvol_delete", 00:05:18.918 "bdev_lvol_set_read_only", 00:05:18.918 "bdev_lvol_resize", 00:05:18.918 "bdev_lvol_decouple_parent", 00:05:18.918 "bdev_lvol_inflate", 00:05:18.918 "bdev_lvol_rename", 00:05:18.918 "bdev_lvol_clone_bdev", 00:05:18.918 "bdev_lvol_clone", 00:05:18.918 "bdev_lvol_snapshot", 00:05:18.918 "bdev_lvol_create", 00:05:18.918 "bdev_lvol_delete_lvstore", 00:05:18.918 "bdev_lvol_rename_lvstore", 00:05:18.918 "bdev_lvol_create_lvstore", 00:05:18.918 "bdev_raid_set_options", 00:05:18.918 "bdev_raid_remove_base_bdev", 00:05:18.918 "bdev_raid_add_base_bdev", 00:05:18.918 "bdev_raid_delete", 00:05:18.918 "bdev_raid_create", 
00:05:18.918 "bdev_raid_get_bdevs", 00:05:18.918 "bdev_error_inject_error", 00:05:18.918 "bdev_error_delete", 00:05:18.918 "bdev_error_create", 00:05:18.918 "bdev_split_delete", 00:05:18.918 "bdev_split_create", 00:05:18.918 "bdev_delay_delete", 00:05:18.918 "bdev_delay_create", 00:05:18.918 "bdev_delay_update_latency", 00:05:18.918 "bdev_zone_block_delete", 00:05:18.918 "bdev_zone_block_create", 00:05:18.918 "blobfs_create", 00:05:18.918 "blobfs_detect", 00:05:18.918 "blobfs_set_cache_size", 00:05:18.918 "bdev_aio_delete", 00:05:18.918 "bdev_aio_rescan", 00:05:18.918 "bdev_aio_create", 00:05:18.918 "bdev_ftl_set_property", 00:05:18.918 "bdev_ftl_get_properties", 00:05:18.918 "bdev_ftl_get_stats", 00:05:18.918 "bdev_ftl_unmap", 00:05:18.918 "bdev_ftl_unload", 00:05:18.918 "bdev_ftl_delete", 00:05:18.918 "bdev_ftl_load", 00:05:18.918 "bdev_ftl_create", 00:05:18.918 "bdev_virtio_attach_controller", 00:05:18.918 "bdev_virtio_scsi_get_devices", 00:05:18.918 "bdev_virtio_detach_controller", 00:05:18.918 "bdev_virtio_blk_set_hotplug", 00:05:18.918 "bdev_iscsi_delete", 00:05:18.918 "bdev_iscsi_create", 00:05:18.918 "bdev_iscsi_set_options", 00:05:18.918 "accel_error_inject_error", 00:05:18.918 "ioat_scan_accel_module", 00:05:18.918 "dsa_scan_accel_module", 00:05:18.918 "iaa_scan_accel_module", 00:05:18.918 "vfu_virtio_create_scsi_endpoint", 00:05:18.918 "vfu_virtio_scsi_remove_target", 00:05:18.918 "vfu_virtio_scsi_add_target", 00:05:18.918 "vfu_virtio_create_blk_endpoint", 00:05:18.918 "vfu_virtio_delete_endpoint", 00:05:18.918 "keyring_file_remove_key", 00:05:18.918 "keyring_file_add_key", 00:05:18.918 "iscsi_get_histogram", 00:05:18.918 "iscsi_enable_histogram", 00:05:18.918 "iscsi_set_options", 00:05:18.918 "iscsi_get_auth_groups", 00:05:18.918 "iscsi_auth_group_remove_secret", 00:05:18.918 "iscsi_auth_group_add_secret", 00:05:18.918 "iscsi_delete_auth_group", 00:05:18.918 "iscsi_create_auth_group", 00:05:18.918 "iscsi_set_discovery_auth", 00:05:18.918 "iscsi_get_options", 00:05:18.918 "iscsi_target_node_request_logout", 00:05:18.918 "iscsi_target_node_set_redirect", 00:05:18.918 "iscsi_target_node_set_auth", 00:05:18.918 "iscsi_target_node_add_lun", 00:05:18.918 "iscsi_get_stats", 00:05:18.918 "iscsi_get_connections", 00:05:18.918 "iscsi_portal_group_set_auth", 00:05:18.918 "iscsi_start_portal_group", 00:05:18.918 "iscsi_delete_portal_group", 00:05:18.918 "iscsi_create_portal_group", 00:05:18.918 "iscsi_get_portal_groups", 00:05:18.918 "iscsi_delete_target_node", 00:05:18.918 "iscsi_target_node_remove_pg_ig_maps", 00:05:18.918 "iscsi_target_node_add_pg_ig_maps", 00:05:18.918 "iscsi_create_target_node", 00:05:18.918 "iscsi_get_target_nodes", 00:05:18.918 "iscsi_delete_initiator_group", 00:05:18.918 "iscsi_initiator_group_remove_initiators", 00:05:18.918 "iscsi_initiator_group_add_initiators", 00:05:18.918 "iscsi_create_initiator_group", 00:05:18.918 "iscsi_get_initiator_groups", 00:05:18.918 "nvmf_set_crdt", 00:05:18.918 "nvmf_set_config", 00:05:18.918 "nvmf_set_max_subsystems", 00:05:18.918 "nvmf_subsystem_get_listeners", 00:05:18.918 "nvmf_subsystem_get_qpairs", 00:05:18.918 "nvmf_subsystem_get_controllers", 00:05:18.918 "nvmf_get_stats", 00:05:18.918 "nvmf_get_transports", 00:05:18.918 "nvmf_create_transport", 00:05:18.918 "nvmf_get_targets", 00:05:18.918 "nvmf_delete_target", 00:05:18.918 "nvmf_create_target", 00:05:18.918 "nvmf_subsystem_allow_any_host", 00:05:18.918 "nvmf_subsystem_remove_host", 00:05:18.918 "nvmf_subsystem_add_host", 00:05:18.918 "nvmf_ns_remove_host", 00:05:18.918 
"nvmf_ns_add_host", 00:05:18.918 "nvmf_subsystem_remove_ns", 00:05:18.918 "nvmf_subsystem_add_ns", 00:05:18.918 "nvmf_subsystem_listener_set_ana_state", 00:05:18.918 "nvmf_discovery_get_referrals", 00:05:18.918 "nvmf_discovery_remove_referral", 00:05:18.918 "nvmf_discovery_add_referral", 00:05:18.918 "nvmf_subsystem_remove_listener", 00:05:18.918 "nvmf_subsystem_add_listener", 00:05:18.918 "nvmf_delete_subsystem", 00:05:18.918 "nvmf_create_subsystem", 00:05:18.918 "nvmf_get_subsystems", 00:05:18.918 "env_dpdk_get_mem_stats", 00:05:18.918 "nbd_get_disks", 00:05:18.918 "nbd_stop_disk", 00:05:18.918 "nbd_start_disk", 00:05:18.918 "ublk_recover_disk", 00:05:18.918 "ublk_get_disks", 00:05:18.918 "ublk_stop_disk", 00:05:18.918 "ublk_start_disk", 00:05:18.918 "ublk_destroy_target", 00:05:18.918 "ublk_create_target", 00:05:18.918 "virtio_blk_create_transport", 00:05:18.918 "virtio_blk_get_transports", 00:05:18.918 "vhost_controller_set_coalescing", 00:05:18.918 "vhost_get_controllers", 00:05:18.918 "vhost_delete_controller", 00:05:18.918 "vhost_create_blk_controller", 00:05:18.918 "vhost_scsi_controller_remove_target", 00:05:18.918 "vhost_scsi_controller_add_target", 00:05:18.918 "vhost_start_scsi_controller", 00:05:18.918 "vhost_create_scsi_controller", 00:05:18.918 "thread_set_cpumask", 00:05:18.918 "framework_get_scheduler", 00:05:18.918 "framework_set_scheduler", 00:05:18.918 "framework_get_reactors", 00:05:18.918 "thread_get_io_channels", 00:05:18.918 "thread_get_pollers", 00:05:18.918 "thread_get_stats", 00:05:18.918 "framework_monitor_context_switch", 00:05:18.918 "spdk_kill_instance", 00:05:18.918 "log_enable_timestamps", 00:05:18.918 "log_get_flags", 00:05:18.918 "log_clear_flag", 00:05:18.918 "log_set_flag", 00:05:18.918 "log_get_level", 00:05:18.918 "log_set_level", 00:05:18.918 "log_get_print_level", 00:05:18.918 "log_set_print_level", 00:05:18.918 "framework_enable_cpumask_locks", 00:05:18.918 "framework_disable_cpumask_locks", 00:05:18.918 "framework_wait_init", 00:05:18.918 "framework_start_init", 00:05:18.918 "scsi_get_devices", 00:05:18.918 "bdev_get_histogram", 00:05:18.918 "bdev_enable_histogram", 00:05:18.918 "bdev_set_qos_limit", 00:05:18.918 "bdev_set_qd_sampling_period", 00:05:18.918 "bdev_get_bdevs", 00:05:18.918 "bdev_reset_iostat", 00:05:18.918 "bdev_get_iostat", 00:05:18.918 "bdev_examine", 00:05:18.918 "bdev_wait_for_examine", 00:05:18.918 "bdev_set_options", 00:05:18.918 "notify_get_notifications", 00:05:18.918 "notify_get_types", 00:05:18.918 "accel_get_stats", 00:05:18.918 "accel_set_options", 00:05:18.918 "accel_set_driver", 00:05:18.918 "accel_crypto_key_destroy", 00:05:18.918 "accel_crypto_keys_get", 00:05:18.918 "accel_crypto_key_create", 00:05:18.918 "accel_assign_opc", 00:05:18.918 "accel_get_module_info", 00:05:18.918 "accel_get_opc_assignments", 00:05:18.918 "vmd_rescan", 00:05:18.918 "vmd_remove_device", 00:05:18.918 "vmd_enable", 00:05:18.918 "sock_get_default_impl", 00:05:18.918 "sock_set_default_impl", 00:05:18.918 "sock_impl_set_options", 00:05:18.918 "sock_impl_get_options", 00:05:18.918 "iobuf_get_stats", 00:05:18.919 "iobuf_set_options", 00:05:18.919 "keyring_get_keys", 00:05:18.919 "framework_get_pci_devices", 00:05:18.919 "framework_get_config", 00:05:18.919 "framework_get_subsystems", 00:05:18.919 "vfu_tgt_set_base_path", 00:05:18.919 "trace_get_info", 00:05:18.919 "trace_get_tpoint_group_mask", 00:05:18.919 "trace_disable_tpoint_group", 00:05:18.919 "trace_enable_tpoint_group", 00:05:18.919 "trace_clear_tpoint_mask", 00:05:18.919 
"trace_set_tpoint_mask", 00:05:18.919 "spdk_get_version", 00:05:18.919 "rpc_get_methods" 00:05:18.919 ] 00:05:18.919 12:01:19 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:18.919 12:01:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:18.919 12:01:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.919 12:01:19 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:18.919 12:01:19 -- spdkcli/tcp.sh@38 -- # killprocess 3198953 00:05:18.919 12:01:19 -- common/autotest_common.sh@936 -- # '[' -z 3198953 ']' 00:05:18.919 12:01:19 -- common/autotest_common.sh@940 -- # kill -0 3198953 00:05:18.919 12:01:19 -- common/autotest_common.sh@941 -- # uname 00:05:18.919 12:01:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:18.919 12:01:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3198953 00:05:18.919 12:01:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:18.919 12:01:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:18.919 12:01:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3198953' 00:05:18.919 killing process with pid 3198953 00:05:18.919 12:01:20 -- common/autotest_common.sh@955 -- # kill 3198953 00:05:18.919 12:01:20 -- common/autotest_common.sh@960 -- # wait 3198953 00:05:19.180 00:05:19.180 real 0m1.399s 00:05:19.180 user 0m2.615s 00:05:19.180 sys 0m0.381s 00:05:19.180 12:01:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:19.180 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.180 ************************************ 00:05:19.180 END TEST spdkcli_tcp 00:05:19.180 ************************************ 00:05:19.180 12:01:20 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.180 12:01:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.180 12:01:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.180 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.442 ************************************ 00:05:19.442 START TEST dpdk_mem_utility 00:05:19.442 ************************************ 00:05:19.442 12:01:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.442 * Looking for test storage... 00:05:19.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:19.442 12:01:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:19.442 12:01:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3199361 00:05:19.442 12:01:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3199361 00:05:19.442 12:01:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.442 12:01:20 -- common/autotest_common.sh@817 -- # '[' -z 3199361 ']' 00:05:19.442 12:01:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.442 12:01:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:19.442 12:01:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:19.442 12:01:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:19.442 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.442 [2024-04-26 12:01:20.612111] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:19.442 [2024-04-26 12:01:20.612178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199361 ] 00:05:19.442 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.704 [2024-04-26 12:01:20.675529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.704 [2024-04-26 12:01:20.738179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.277 12:01:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:20.277 12:01:21 -- common/autotest_common.sh@850 -- # return 0 00:05:20.277 12:01:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:20.277 12:01:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:20.277 12:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.277 12:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:20.277 { 00:05:20.277 "filename": "/tmp/spdk_mem_dump.txt" 00:05:20.277 } 00:05:20.277 12:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.277 12:01:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:20.277 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:20.277 1 heaps totaling size 814.000000 MiB 00:05:20.277 size: 814.000000 MiB heap id: 0 00:05:20.277 end heaps---------- 00:05:20.277 8 mempools totaling size 598.116089 MiB 00:05:20.277 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:20.277 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:20.277 size: 84.521057 MiB name: bdev_io_3199361 00:05:20.277 size: 51.011292 MiB name: evtpool_3199361 00:05:20.277 size: 50.003479 MiB name: msgpool_3199361 00:05:20.277 size: 21.763794 MiB name: PDU_Pool 00:05:20.277 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:20.277 size: 0.026123 MiB name: Session_Pool 00:05:20.277 end mempools------- 00:05:20.277 6 memzones totaling size 4.142822 MiB 00:05:20.277 size: 1.000366 MiB name: RG_ring_0_3199361 00:05:20.277 size: 1.000366 MiB name: RG_ring_1_3199361 00:05:20.277 size: 1.000366 MiB name: RG_ring_4_3199361 00:05:20.277 size: 1.000366 MiB name: RG_ring_5_3199361 00:05:20.277 size: 0.125366 MiB name: RG_ring_2_3199361 00:05:20.277 size: 0.015991 MiB name: RG_ring_3_3199361 00:05:20.277 end memzones------- 00:05:20.277 12:01:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:20.277 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:20.277 list of free elements. 
size: 12.519348 MiB 00:05:20.277 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:20.277 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:20.277 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:20.277 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:20.277 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:20.277 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:20.277 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:20.277 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:20.277 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:20.277 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:20.277 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:20.277 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:20.277 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:20.277 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:20.278 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:20.278 list of standard malloc elements. size: 199.218079 MiB 00:05:20.278 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:20.278 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:20.278 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:20.278 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:20.278 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:20.278 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:20.278 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:20.278 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:20.278 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:20.278 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:20.278 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:20.278 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:20.278 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:20.278 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:20.278 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:20.278 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:20.278 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:20.278 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:20.278 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:20.278 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:20.278 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:20.278 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:20.278 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:20.278 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:20.278 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:20.278 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:20.278 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:20.278 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:20.278 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:20.278 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:20.278 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:20.278 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:20.278 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:20.278 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:20.278 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:20.278 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:20.278 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:20.278 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:20.278 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:20.278 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:20.278 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:20.278 list of memzone associated elements. size: 602.262573 MiB 00:05:20.278 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:20.278 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:20.278 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:20.278 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:20.278 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:20.278 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3199361_0 00:05:20.278 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:20.278 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3199361_0 00:05:20.278 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:20.278 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3199361_0 00:05:20.278 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:20.278 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:20.278 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:20.278 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:20.278 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:20.278 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3199361 00:05:20.278 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:20.278 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3199361 00:05:20.278 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:20.278 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3199361 00:05:20.278 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:20.278 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:20.278 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:20.278 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:20.278 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:20.278 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:20.278 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:20.278 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:20.278 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:20.278 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3199361 00:05:20.278 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:20.278 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3199361 00:05:20.278 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:20.278 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3199361 00:05:20.278 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:20.278 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3199361 00:05:20.278 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:20.278 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3199361 00:05:20.278 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:20.278 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:20.278 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:20.278 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:20.278 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:20.278 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:20.278 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:20.278 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3199361 00:05:20.278 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:20.278 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:20.278 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:20.278 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:20.278 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:20.278 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3199361 00:05:20.278 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:20.278 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:20.278 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:20.278 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3199361 00:05:20.278 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:20.278 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3199361 00:05:20.278 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:20.278 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:20.278 12:01:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:20.278 12:01:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3199361 00:05:20.278 12:01:21 -- common/autotest_common.sh@936 -- # '[' -z 3199361 ']' 00:05:20.278 12:01:21 -- common/autotest_common.sh@940 -- # kill -0 3199361 00:05:20.278 12:01:21 -- common/autotest_common.sh@941 -- # uname 00:05:20.278 12:01:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:20.278 12:01:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3199361 00:05:20.540 12:01:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:20.540 12:01:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:20.540 12:01:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3199361' 00:05:20.540 killing process with pid 3199361 00:05:20.540 12:01:21 -- common/autotest_common.sh@955 -- # kill 3199361 00:05:20.540 12:01:21 -- common/autotest_common.sh@960 -- # wait 3199361 00:05:20.540 00:05:20.540 real 0m1.297s 00:05:20.540 user 0m1.394s 00:05:20.540 sys 0m0.354s 00:05:20.540 12:01:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.540 12:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:20.540 ************************************ 00:05:20.540 END TEST dpdk_mem_utility 00:05:20.540 ************************************ 00:05:20.801 12:01:21 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.801 12:01:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.801 12:01:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.801 12:01:21 -- common/autotest_common.sh@10 -- # set +x 
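Note: the dpdk_mem_utility section that just finished drives scripts/dpdk_mem_info.py against a running target: env_dpdk_get_mem_stats makes the target dump its DPDK state to /tmp/spdk_mem_dump.txt, the script then summarizes heaps, mempools and memzones, and -m 0 prints the per-element heap listing seen above. A hedged reproduction of that flow outside the harness (SPDK_DIR is a placeholder):

  # Sketch only; assumes spdk_tgt is already running on the default RPC socket.
  SPDK_DIR=/path/to/spdk

  # Have the target write its DPDK memory state to /tmp/spdk_mem_dump.txt.
  "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

  # Summarize heaps/mempools/memzones, then dump heap 0 element by element.
  "$SPDK_DIR/scripts/dpdk_mem_info.py"
  "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0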
00:05:20.801 ************************************ 00:05:20.801 START TEST event 00:05:20.801 ************************************ 00:05:20.801 12:01:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.801 * Looking for test storage... 00:05:20.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:20.801 12:01:22 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:20.801 12:01:22 -- bdev/nbd_common.sh@6 -- # set -e 00:05:21.062 12:01:22 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.062 12:01:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:21.062 12:01:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.062 12:01:22 -- common/autotest_common.sh@10 -- # set +x 00:05:21.062 ************************************ 00:05:21.062 START TEST event_perf 00:05:21.062 ************************************ 00:05:21.062 12:01:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.062 Running I/O for 1 seconds...[2024-04-26 12:01:22.199771] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:21.062 [2024-04-26 12:01:22.199899] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199770 ] 00:05:21.062 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.062 [2024-04-26 12:01:22.266505] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.323 [2024-04-26 12:01:22.342100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.323 [2024-04-26 12:01:22.342216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.323 [2024-04-26 12:01:22.342343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.323 Running I/O for 1 seconds...[2024-04-26 12:01:22.342342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.268 00:05:22.268 lcore 0: 169190 00:05:22.268 lcore 1: 169186 00:05:22.268 lcore 2: 169188 00:05:22.268 lcore 3: 169190 00:05:22.268 done. 
00:05:22.268 00:05:22.268 real 0m1.217s 00:05:22.268 user 0m4.135s 00:05:22.268 sys 0m0.079s 00:05:22.268 12:01:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.268 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:05:22.268 ************************************ 00:05:22.268 END TEST event_perf 00:05:22.268 ************************************ 00:05:22.268 12:01:23 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.268 12:01:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:22.268 12:01:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.268 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:05:22.529 ************************************ 00:05:22.529 START TEST event_reactor 00:05:22.529 ************************************ 00:05:22.529 12:01:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.529 [2024-04-26 12:01:23.610429] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:22.529 [2024-04-26 12:01:23.610513] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200129 ] 00:05:22.529 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.529 [2024-04-26 12:01:23.676440] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.529 [2024-04-26 12:01:23.746954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.914 test_start 00:05:23.914 oneshot 00:05:23.914 tick 100 00:05:23.914 tick 100 00:05:23.914 tick 250 00:05:23.914 tick 100 00:05:23.914 tick 100 00:05:23.914 tick 100 00:05:23.914 tick 250 00:05:23.914 tick 500 00:05:23.914 tick 100 00:05:23.914 tick 100 00:05:23.914 tick 250 00:05:23.914 tick 100 00:05:23.914 tick 100 00:05:23.914 test_end 00:05:23.914 00:05:23.914 real 0m1.209s 00:05:23.914 user 0m1.137s 00:05:23.914 sys 0m0.068s 00:05:23.914 12:01:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.914 12:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:23.914 ************************************ 00:05:23.914 END TEST event_reactor 00:05:23.914 ************************************ 00:05:23.914 12:01:24 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.914 12:01:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:23.914 12:01:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.914 12:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:23.914 ************************************ 00:05:23.914 START TEST event_reactor_perf 00:05:23.914 ************************************ 00:05:23.914 12:01:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.914 [2024-04-26 12:01:25.013387] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:23.914 [2024-04-26 12:01:25.013483] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200406 ] 00:05:23.914 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.914 [2024-04-26 12:01:25.081802] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.175 [2024-04-26 12:01:25.152768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.117 test_start 00:05:25.117 test_end 00:05:25.117 Performance: 365994 events per second 00:05:25.117 00:05:25.117 real 0m1.213s 00:05:25.117 user 0m1.135s 00:05:25.117 sys 0m0.073s 00:05:25.117 12:01:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:25.117 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:25.117 ************************************ 00:05:25.117 END TEST event_reactor_perf 00:05:25.117 ************************************ 00:05:25.117 12:01:26 -- event/event.sh@49 -- # uname -s 00:05:25.117 12:01:26 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:25.117 12:01:26 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:25.117 12:01:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.117 12:01:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.117 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:25.378 ************************************ 00:05:25.378 START TEST event_scheduler 00:05:25.378 ************************************ 00:05:25.378 12:01:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:25.378 * Looking for test storage... 00:05:25.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:25.378 12:01:26 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:25.378 12:01:26 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3200702 00:05:25.378 12:01:26 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.378 12:01:26 -- scheduler/scheduler.sh@37 -- # waitforlisten 3200702 00:05:25.378 12:01:26 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:25.378 12:01:26 -- common/autotest_common.sh@817 -- # '[' -z 3200702 ']' 00:05:25.378 12:01:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.378 12:01:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:25.378 12:01:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.378 12:01:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:25.378 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:25.378 [2024-04-26 12:01:26.572583] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:25.378 [2024-04-26 12:01:26.572663] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200702 ] 00:05:25.639 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.639 [2024-04-26 12:01:26.630108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.639 [2024-04-26 12:01:26.696320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.639 [2024-04-26 12:01:26.696478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.639 [2024-04-26 12:01:26.696627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.639 [2024-04-26 12:01:26.696629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.308 12:01:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:26.308 12:01:27 -- common/autotest_common.sh@850 -- # return 0 00:05:26.308 12:01:27 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:26.308 12:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.308 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.308 POWER: Env isn't set yet! 00:05:26.308 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:26.308 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:26.308 POWER: Cannot set governor of lcore 0 to userspace 00:05:26.308 POWER: Attempting to initialise PSTAT power management... 00:05:26.308 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:26.308 POWER: Initialized successfully for lcore 0 power management 00:05:26.308 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:26.308 POWER: Initialized successfully for lcore 1 power management 00:05:26.308 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:26.308 POWER: Initialized successfully for lcore 2 power management 00:05:26.308 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:26.308 POWER: Initialized successfully for lcore 3 power management 00:05:26.308 12:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.309 12:01:27 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:26.309 12:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.309 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.309 [2024-04-26 12:01:27.462154] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
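Note: with the dynamic scheduler selected and framework_start_init done, the scheduler_create_thread sub-test below creates pinned and unpinned SPDK threads through the scheduler test app's plugin RPCs. A sketch of the calls it issues, mirroring the trace that follows (the socket path and the reading of -a as a busy-time percentage are assumptions; the test itself wraps these in its rpc_cmd helper):

  # Sketch only.
  RPC='scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin'

  # Threads pinned to a single core: one fully active, one idle.
  $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0

  # An unpinned thread; the returned id can later be retuned or deleted.
  tid=$($RPC scheduler_thread_create -n one_third_active -a 30)
  $RPC scheduler_thread_set_active "$tid" 50
  $RPC scheduler_thread_delete "$tid"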
00:05:26.309 12:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.309 12:01:27 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:26.309 12:01:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.309 12:01:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.309 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.602 ************************************ 00:05:26.602 START TEST scheduler_create_thread 00:05:26.602 ************************************ 00:05:26.602 12:01:27 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:26.602 12:01:27 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:26.602 12:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.602 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.602 2 00:05:26.602 12:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.602 12:01:27 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:26.602 12:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.603 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.603 3 00:05:26.603 12:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.603 12:01:27 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:26.603 12:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.603 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.603 4 00:05:26.603 12:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.603 12:01:27 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:26.603 12:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.603 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.603 5 00:05:26.603 12:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.603 12:01:27 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:26.603 12:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.603 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.603 6 00:05:26.603 12:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.603 12:01:27 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:26.603 12:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.603 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.603 7 00:05:26.603 12:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.603 12:01:27 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:26.603 12:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.603 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.603 8 00:05:26.603 12:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.603 12:01:27 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:26.603 12:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.603 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.603 9 00:05:26.603 
12:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.603 12:01:27 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:26.603 12:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.603 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:27.986 10 00:05:27.986 12:01:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:27.986 12:01:28 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:27.986 12:01:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:27.986 12:01:28 -- common/autotest_common.sh@10 -- # set +x 00:05:29.370 12:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:29.370 12:01:30 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:29.370 12:01:30 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:29.370 12:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:29.370 12:01:30 -- common/autotest_common.sh@10 -- # set +x 00:05:29.941 12:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:29.941 12:01:31 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:29.941 12:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:29.941 12:01:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.881 12:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.881 12:01:31 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:30.881 12:01:31 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:30.881 12:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.881 12:01:31 -- common/autotest_common.sh@10 -- # set +x 00:05:31.823 12:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.823 00:05:31.823 real 0m5.097s 00:05:31.823 user 0m0.023s 00:05:31.823 sys 0m0.008s 00:05:31.823 12:01:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.823 12:01:32 -- common/autotest_common.sh@10 -- # set +x 00:05:31.823 ************************************ 00:05:31.823 END TEST scheduler_create_thread 00:05:31.823 ************************************ 00:05:31.823 12:01:32 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:31.823 12:01:32 -- scheduler/scheduler.sh@46 -- # killprocess 3200702 00:05:31.823 12:01:32 -- common/autotest_common.sh@936 -- # '[' -z 3200702 ']' 00:05:31.823 12:01:32 -- common/autotest_common.sh@940 -- # kill -0 3200702 00:05:31.823 12:01:32 -- common/autotest_common.sh@941 -- # uname 00:05:31.823 12:01:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.823 12:01:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3200702 00:05:31.823 12:01:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:31.823 12:01:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:31.823 12:01:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3200702' 00:05:31.823 killing process with pid 3200702 00:05:31.823 12:01:32 -- common/autotest_common.sh@955 -- # kill 3200702 00:05:31.823 12:01:32 -- common/autotest_common.sh@960 -- # wait 3200702 00:05:32.084 [2024-04-26 12:01:33.089503] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
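Note: once the scheduler test tears down and the power governors below are restored, the log moves on to app_repeat, which verifies malloc bdevs through the kernel NBD layer: create a bdev, export it as /dev/nbdX, write a random pattern with dd, and read it back with cmp. A condensed sketch of that pattern as it appears in the app_repeat section below (the RPC socket and the temp-file path are placeholders):

  # Sketch only; app_repeat runs the equivalent against /var/tmp/spdk-nbd.sock.
  RPC='scripts/rpc.py -s /var/tmp/spdk-nbd.sock'

  $RPC bdev_malloc_create 64 4096          # 64 MiB bdev with 4 KiB blocks -> Malloc0
  $RPC nbd_start_disk Malloc0 /dev/nbd0    # expose it as a kernel block device

  # Write a random 1 MiB pattern through the NBD device, then verify it.
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0

  $RPC nbd_stop_disk /dev/nbd0
  $RPC bdev_malloc_delete Malloc0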
00:05:32.084 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:32.084 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:32.084 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:32.084 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:32.084 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:32.084 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:32.084 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:32.084 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:32.084 00:05:32.084 real 0m6.885s 00:05:32.084 user 0m13.387s 00:05:32.084 sys 0m0.430s 00:05:32.084 12:01:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:32.084 12:01:33 -- common/autotest_common.sh@10 -- # set +x 00:05:32.084 ************************************ 00:05:32.084 END TEST event_scheduler 00:05:32.084 ************************************ 00:05:32.344 12:01:33 -- event/event.sh@51 -- # modprobe -n nbd 00:05:32.344 12:01:33 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:32.344 12:01:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.344 12:01:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.344 12:01:33 -- common/autotest_common.sh@10 -- # set +x 00:05:32.344 ************************************ 00:05:32.344 START TEST app_repeat 00:05:32.344 ************************************ 00:05:32.344 12:01:33 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:32.344 12:01:33 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.344 12:01:33 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.344 12:01:33 -- event/event.sh@13 -- # local nbd_list 00:05:32.344 12:01:33 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.344 12:01:33 -- event/event.sh@14 -- # local bdev_list 00:05:32.344 12:01:33 -- event/event.sh@15 -- # local repeat_times=4 00:05:32.344 12:01:33 -- event/event.sh@17 -- # modprobe nbd 00:05:32.344 12:01:33 -- event/event.sh@19 -- # repeat_pid=3202202 00:05:32.344 12:01:33 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.344 12:01:33 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:32.344 12:01:33 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3202202' 00:05:32.344 Process app_repeat pid: 3202202 00:05:32.344 12:01:33 -- event/event.sh@23 -- # for i in {0..2} 00:05:32.344 12:01:33 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:32.344 spdk_app_start Round 0 00:05:32.344 12:01:33 -- event/event.sh@25 -- # waitforlisten 3202202 /var/tmp/spdk-nbd.sock 00:05:32.344 12:01:33 -- common/autotest_common.sh@817 -- # '[' -z 3202202 ']' 00:05:32.344 12:01:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.344 12:01:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:32.344 12:01:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:32.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.344 12:01:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:32.344 12:01:33 -- common/autotest_common.sh@10 -- # set +x 00:05:32.344 [2024-04-26 12:01:33.539852] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:32.344 [2024-04-26 12:01:33.539927] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3202202 ] 00:05:32.605 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.605 [2024-04-26 12:01:33.606131] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.605 [2024-04-26 12:01:33.679805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.605 [2024-04-26 12:01:33.679809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.176 12:01:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:33.176 12:01:34 -- common/autotest_common.sh@850 -- # return 0 00:05:33.176 12:01:34 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.436 Malloc0 00:05:33.436 12:01:34 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.698 Malloc1 00:05:33.698 12:01:34 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@12 -- # local i 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.698 /dev/nbd0 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.698 12:01:34 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:33.698 12:01:34 -- common/autotest_common.sh@855 -- # local i 00:05:33.698 12:01:34 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:33.698 12:01:34 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:33.698 12:01:34 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:33.698 12:01:34 -- 
common/autotest_common.sh@859 -- # break 00:05:33.698 12:01:34 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:33.698 12:01:34 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:33.698 12:01:34 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.698 1+0 records in 00:05:33.698 1+0 records out 00:05:33.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272643 s, 15.0 MB/s 00:05:33.698 12:01:34 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.698 12:01:34 -- common/autotest_common.sh@872 -- # size=4096 00:05:33.698 12:01:34 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.698 12:01:34 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:33.698 12:01:34 -- common/autotest_common.sh@875 -- # return 0 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.698 12:01:34 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.959 /dev/nbd1 00:05:33.959 12:01:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.959 12:01:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.959 12:01:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:33.959 12:01:35 -- common/autotest_common.sh@855 -- # local i 00:05:33.959 12:01:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:33.959 12:01:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:33.959 12:01:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:33.959 12:01:35 -- common/autotest_common.sh@859 -- # break 00:05:33.959 12:01:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:33.959 12:01:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:33.959 12:01:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.959 1+0 records in 00:05:33.959 1+0 records out 00:05:33.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242297 s, 16.9 MB/s 00:05:33.960 12:01:35 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.960 12:01:35 -- common/autotest_common.sh@872 -- # size=4096 00:05:33.960 12:01:35 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.960 12:01:35 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:33.960 12:01:35 -- common/autotest_common.sh@875 -- # return 0 00:05:33.960 12:01:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.960 12:01:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.960 12:01:35 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.960 12:01:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.960 12:01:35 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.221 { 00:05:34.221 "nbd_device": "/dev/nbd0", 00:05:34.221 "bdev_name": "Malloc0" 00:05:34.221 }, 00:05:34.221 { 00:05:34.221 "nbd_device": "/dev/nbd1", 
00:05:34.221 "bdev_name": "Malloc1" 00:05:34.221 } 00:05:34.221 ]' 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.221 { 00:05:34.221 "nbd_device": "/dev/nbd0", 00:05:34.221 "bdev_name": "Malloc0" 00:05:34.221 }, 00:05:34.221 { 00:05:34.221 "nbd_device": "/dev/nbd1", 00:05:34.221 "bdev_name": "Malloc1" 00:05:34.221 } 00:05:34.221 ]' 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.221 /dev/nbd1' 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.221 /dev/nbd1' 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.221 256+0 records in 00:05:34.221 256+0 records out 00:05:34.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124611 s, 84.1 MB/s 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.221 256+0 records in 00:05:34.221 256+0 records out 00:05:34.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159557 s, 65.7 MB/s 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.221 256+0 records in 00:05:34.221 256+0 records out 00:05:34.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173837 s, 60.3 MB/s 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@51 -- # local i 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.221 12:01:35 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.482 12:01:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.482 12:01:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.482 12:01:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.482 12:01:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.482 12:01:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.482 12:01:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.482 12:01:35 -- bdev/nbd_common.sh@41 -- # break 00:05:34.482 12:01:35 -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.482 12:01:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.482 12:01:35 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@41 -- # break 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@65 -- # true 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.744 12:01:35 -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.745 12:01:35 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.004 12:01:36 -- event/event.sh@35 -- # 
sleep 3 00:05:35.004 [2024-04-26 12:01:36.222762] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.265 [2024-04-26 12:01:36.283530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.265 [2024-04-26 12:01:36.283534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.265 [2024-04-26 12:01:36.315255] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.265 [2024-04-26 12:01:36.315291] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.569 12:01:39 -- event/event.sh@23 -- # for i in {0..2} 00:05:38.569 12:01:39 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:38.569 spdk_app_start Round 1 00:05:38.569 12:01:39 -- event/event.sh@25 -- # waitforlisten 3202202 /var/tmp/spdk-nbd.sock 00:05:38.569 12:01:39 -- common/autotest_common.sh@817 -- # '[' -z 3202202 ']' 00:05:38.569 12:01:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.569 12:01:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:38.569 12:01:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.569 12:01:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:38.569 12:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.569 12:01:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:38.569 12:01:39 -- common/autotest_common.sh@850 -- # return 0 00:05:38.569 12:01:39 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.569 Malloc0 00:05:38.569 12:01:39 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.569 Malloc1 00:05:38.569 12:01:39 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@12 -- # local i 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.569 /dev/nbd0 00:05:38.569 12:01:39 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.569 12:01:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:38.569 12:01:39 -- common/autotest_common.sh@855 -- # local i 00:05:38.569 12:01:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:38.569 12:01:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:38.569 12:01:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:38.569 12:01:39 -- common/autotest_common.sh@859 -- # break 00:05:38.569 12:01:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:38.569 12:01:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:38.569 12:01:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.569 1+0 records in 00:05:38.569 1+0 records out 00:05:38.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206241 s, 19.9 MB/s 00:05:38.569 12:01:39 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.569 12:01:39 -- common/autotest_common.sh@872 -- # size=4096 00:05:38.569 12:01:39 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.569 12:01:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:38.569 12:01:39 -- common/autotest_common.sh@875 -- # return 0 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.569 12:01:39 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.828 /dev/nbd1 00:05:38.828 12:01:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.828 12:01:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.828 12:01:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:38.828 12:01:39 -- common/autotest_common.sh@855 -- # local i 00:05:38.828 12:01:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:38.828 12:01:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:38.828 12:01:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:38.828 12:01:39 -- common/autotest_common.sh@859 -- # break 00:05:38.828 12:01:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:38.828 12:01:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:38.828 12:01:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.828 1+0 records in 00:05:38.828 1+0 records out 00:05:38.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337289 s, 12.1 MB/s 00:05:38.828 12:01:39 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.828 12:01:39 -- common/autotest_common.sh@872 -- # size=4096 00:05:38.828 12:01:39 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.828 12:01:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:38.828 12:01:39 -- common/autotest_common.sh@875 -- # return 0 00:05:38.828 12:01:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.828 12:01:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.828 12:01:39 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.828 12:01:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.828 12:01:39 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.088 { 00:05:39.088 "nbd_device": "/dev/nbd0", 00:05:39.088 "bdev_name": "Malloc0" 00:05:39.088 }, 00:05:39.088 { 00:05:39.088 "nbd_device": "/dev/nbd1", 00:05:39.088 "bdev_name": "Malloc1" 00:05:39.088 } 00:05:39.088 ]' 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.088 { 00:05:39.088 "nbd_device": "/dev/nbd0", 00:05:39.088 "bdev_name": "Malloc0" 00:05:39.088 }, 00:05:39.088 { 00:05:39.088 "nbd_device": "/dev/nbd1", 00:05:39.088 "bdev_name": "Malloc1" 00:05:39.088 } 00:05:39.088 ]' 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.088 /dev/nbd1' 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.088 /dev/nbd1' 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.088 256+0 records in 00:05:39.088 256+0 records out 00:05:39.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118204 s, 88.7 MB/s 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.088 256+0 records in 00:05:39.088 256+0 records out 00:05:39.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016782 s, 62.5 MB/s 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.088 256+0 records in 00:05:39.088 256+0 records out 00:05:39.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176417 s, 59.4 MB/s 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@51 -- # local i 00:05:39.088 12:01:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.089 12:01:40 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.349 12:01:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.349 12:01:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.349 12:01:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.349 12:01:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.349 12:01:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.349 12:01:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.349 12:01:40 -- bdev/nbd_common.sh@41 -- # break 00:05:39.349 12:01:40 -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.349 12:01:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.349 12:01:40 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.610 12:01:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.610 12:01:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.610 12:01:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.610 12:01:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.610 12:01:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.610 12:01:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.610 12:01:40 -- bdev/nbd_common.sh@41 -- # break 00:05:39.610 12:01:40 -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.610 12:01:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.610 12:01:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.611 12:01:40 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.611 12:01:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.611 12:01:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.611 12:01:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.611 12:01:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.611 12:01:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.611 12:01:40 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:39.611 12:01:40 -- bdev/nbd_common.sh@65 -- # true 00:05:39.611 12:01:40 -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.611 12:01:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.611 12:01:40 -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.611 12:01:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.611 12:01:40 -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.611 12:01:40 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.871 12:01:40 -- event/event.sh@35 -- # sleep 3 00:05:39.871 [2024-04-26 12:01:41.089980] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.131 [2024-04-26 12:01:41.151737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.131 [2024-04-26 12:01:41.151740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.131 [2024-04-26 12:01:41.184300] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.131 [2024-04-26 12:01:41.184336] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.428 12:01:43 -- event/event.sh@23 -- # for i in {0..2} 00:05:43.428 12:01:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:43.428 spdk_app_start Round 2 00:05:43.428 12:01:43 -- event/event.sh@25 -- # waitforlisten 3202202 /var/tmp/spdk-nbd.sock 00:05:43.428 12:01:43 -- common/autotest_common.sh@817 -- # '[' -z 3202202 ']' 00:05:43.428 12:01:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.428 12:01:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:43.428 12:01:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
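Between the data passes, each round also checks the exports from the target's side: nbd_get_disks returns a JSON array which the helpers reduce with jq, and an empty list after nbd_stop_disk is what lets a round finish. On its own, that probe is roughly the following sketch (RPC socket and jq filter taken from the trace; the RPC shorthand variable is illustration only):

RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
$RPC nbd_get_disks | jq -r '.[] | .nbd_device'                      # /dev/nbd0 and /dev/nbd1 while exported
$RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd   # prints 2 while exported, 0 once both are stopped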
00:05:43.428 12:01:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:43.428 12:01:43 -- common/autotest_common.sh@10 -- # set +x 00:05:43.428 12:01:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:43.428 12:01:44 -- common/autotest_common.sh@850 -- # return 0 00:05:43.428 12:01:44 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.428 Malloc0 00:05:43.428 12:01:44 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.428 Malloc1 00:05:43.428 12:01:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@12 -- # local i 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.428 /dev/nbd0 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.428 12:01:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:43.428 12:01:44 -- common/autotest_common.sh@855 -- # local i 00:05:43.428 12:01:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:43.428 12:01:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:43.428 12:01:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:43.428 12:01:44 -- common/autotest_common.sh@859 -- # break 00:05:43.428 12:01:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:43.428 12:01:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:43.428 12:01:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.428 1+0 records in 00:05:43.428 1+0 records out 00:05:43.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248336 s, 16.5 MB/s 00:05:43.428 12:01:44 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.428 12:01:44 -- common/autotest_common.sh@872 -- # size=4096 00:05:43.428 12:01:44 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.428 12:01:44 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:05:43.428 12:01:44 -- common/autotest_common.sh@875 -- # return 0 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.428 12:01:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.691 /dev/nbd1 00:05:43.691 12:01:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.691 12:01:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.691 12:01:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:43.691 12:01:44 -- common/autotest_common.sh@855 -- # local i 00:05:43.691 12:01:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:43.691 12:01:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:43.691 12:01:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:43.691 12:01:44 -- common/autotest_common.sh@859 -- # break 00:05:43.691 12:01:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:43.691 12:01:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:43.691 12:01:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.691 1+0 records in 00:05:43.691 1+0 records out 00:05:43.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252706 s, 16.2 MB/s 00:05:43.691 12:01:44 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.691 12:01:44 -- common/autotest_common.sh@872 -- # size=4096 00:05:43.691 12:01:44 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.691 12:01:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:43.691 12:01:44 -- common/autotest_common.sh@875 -- # return 0 00:05:43.691 12:01:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.691 12:01:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.691 12:01:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.691 12:01:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.691 12:01:44 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.951 12:01:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.951 { 00:05:43.951 "nbd_device": "/dev/nbd0", 00:05:43.951 "bdev_name": "Malloc0" 00:05:43.951 }, 00:05:43.951 { 00:05:43.951 "nbd_device": "/dev/nbd1", 00:05:43.951 "bdev_name": "Malloc1" 00:05:43.952 } 00:05:43.952 ]' 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.952 { 00:05:43.952 "nbd_device": "/dev/nbd0", 00:05:43.952 "bdev_name": "Malloc0" 00:05:43.952 }, 00:05:43.952 { 00:05:43.952 "nbd_device": "/dev/nbd1", 00:05:43.952 "bdev_name": "Malloc1" 00:05:43.952 } 00:05:43.952 ]' 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.952 /dev/nbd1' 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.952 /dev/nbd1' 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.952 12:01:44 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.952 12:01:44 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.952 256+0 records in 00:05:43.952 256+0 records out 00:05:43.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119338 s, 87.9 MB/s 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.952 256+0 records in 00:05:43.952 256+0 records out 00:05:43.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159274 s, 65.8 MB/s 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.952 256+0 records in 00:05:43.952 256+0 records out 00:05:43.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219039 s, 47.9 MB/s 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@51 -- # local i 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.952 12:01:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.212 12:01:45 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.212 12:01:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.212 12:01:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.212 12:01:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.212 12:01:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.212 12:01:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.212 12:01:45 -- bdev/nbd_common.sh@41 -- # break 00:05:44.212 12:01:45 -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.212 12:01:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.212 12:01:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.212 12:01:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@41 -- # break 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@65 -- # true 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.473 12:01:45 -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.473 12:01:45 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.733 12:01:45 -- event/event.sh@35 -- # sleep 3 00:05:44.733 [2024-04-26 12:01:45.943916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.993 [2024-04-26 12:01:46.004794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.993 [2024-04-26 12:01:46.004796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.993 [2024-04-26 12:01:46.036563] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.993 [2024-04-26 12:01:46.036596] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
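Each of the three rounds above drives the same NBD round-trip through the app's RPC socket. Reduced to a standalone sketch covering one device (RPC names, sizes and the socket path are taken from the trace; the RPC shorthand, the relative nbdrandtest file name and handling only /dev/nbd0 are simplifications, and a running SPDK app plus a loaded nbd module are assumed):

RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
$RPC bdev_malloc_create 64 4096                        # 64 MB Malloc bdev, 4096-byte blocks -> Malloc0
$RPC nbd_start_disk Malloc0 /dev/nbd0                  # expose the bdev as a kernel NBD device
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB of random reference data
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M nbdrandtest /dev/nbd0                     # read back through NBD, byte-for-byte compare
$RPC nbd_stop_disk /dev/nbd0
$RPC spdk_kill_instance SIGTERM                        # end this iteration; app_repeat then starts the next round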
00:05:48.291 12:01:48 -- event/event.sh@38 -- # waitforlisten 3202202 /var/tmp/spdk-nbd.sock 00:05:48.291 12:01:48 -- common/autotest_common.sh@817 -- # '[' -z 3202202 ']' 00:05:48.291 12:01:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.291 12:01:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:48.291 12:01:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.291 12:01:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:48.291 12:01:48 -- common/autotest_common.sh@10 -- # set +x 00:05:48.291 12:01:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.291 12:01:48 -- common/autotest_common.sh@850 -- # return 0 00:05:48.291 12:01:48 -- event/event.sh@39 -- # killprocess 3202202 00:05:48.291 12:01:48 -- common/autotest_common.sh@936 -- # '[' -z 3202202 ']' 00:05:48.291 12:01:48 -- common/autotest_common.sh@940 -- # kill -0 3202202 00:05:48.291 12:01:48 -- common/autotest_common.sh@941 -- # uname 00:05:48.291 12:01:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.291 12:01:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3202202 00:05:48.291 12:01:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.291 12:01:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.291 12:01:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3202202' 00:05:48.291 killing process with pid 3202202 00:05:48.291 12:01:49 -- common/autotest_common.sh@955 -- # kill 3202202 00:05:48.291 12:01:49 -- common/autotest_common.sh@960 -- # wait 3202202 00:05:48.291 spdk_app_start is called in Round 0. 00:05:48.291 Shutdown signal received, stop current app iteration 00:05:48.291 Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 reinitialization... 00:05:48.291 spdk_app_start is called in Round 1. 00:05:48.291 Shutdown signal received, stop current app iteration 00:05:48.291 Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 reinitialization... 00:05:48.291 spdk_app_start is called in Round 2. 00:05:48.291 Shutdown signal received, stop current app iteration 00:05:48.291 Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 reinitialization... 00:05:48.291 spdk_app_start is called in Round 3. 
00:05:48.291 Shutdown signal received, stop current app iteration 00:05:48.291 12:01:49 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:48.291 12:01:49 -- event/event.sh@42 -- # return 0 00:05:48.291 00:05:48.291 real 0m15.632s 00:05:48.291 user 0m33.743s 00:05:48.291 sys 0m2.066s 00:05:48.291 12:01:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:48.291 12:01:49 -- common/autotest_common.sh@10 -- # set +x 00:05:48.291 ************************************ 00:05:48.291 END TEST app_repeat 00:05:48.291 ************************************ 00:05:48.291 12:01:49 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:48.291 12:01:49 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:48.291 12:01:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.291 12:01:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.291 12:01:49 -- common/autotest_common.sh@10 -- # set +x 00:05:48.291 ************************************ 00:05:48.291 START TEST cpu_locks 00:05:48.291 ************************************ 00:05:48.291 12:01:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:48.291 * Looking for test storage... 00:05:48.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:48.291 12:01:49 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:48.291 12:01:49 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:48.291 12:01:49 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:48.291 12:01:49 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:48.291 12:01:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.291 12:01:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.291 12:01:49 -- common/autotest_common.sh@10 -- # set +x 00:05:48.552 ************************************ 00:05:48.552 START TEST default_locks 00:05:48.552 ************************************ 00:05:48.552 12:01:49 -- common/autotest_common.sh@1111 -- # default_locks 00:05:48.552 12:01:49 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3205563 00:05:48.552 12:01:49 -- event/cpu_locks.sh@47 -- # waitforlisten 3205563 00:05:48.552 12:01:49 -- common/autotest_common.sh@817 -- # '[' -z 3205563 ']' 00:05:48.552 12:01:49 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.552 12:01:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.552 12:01:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:48.552 12:01:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.552 12:01:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:48.552 12:01:49 -- common/autotest_common.sh@10 -- # set +x 00:05:48.552 [2024-04-26 12:01:49.642811] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:48.552 [2024-04-26 12:01:49.642876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205563 ] 00:05:48.552 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.552 [2024-04-26 12:01:49.707247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.813 [2024-04-26 12:01:49.781086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.384 12:01:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:49.384 12:01:50 -- common/autotest_common.sh@850 -- # return 0 00:05:49.384 12:01:50 -- event/cpu_locks.sh@49 -- # locks_exist 3205563 00:05:49.384 12:01:50 -- event/cpu_locks.sh@22 -- # lslocks -p 3205563 00:05:49.384 12:01:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.644 lslocks: write error 00:05:49.644 12:01:50 -- event/cpu_locks.sh@50 -- # killprocess 3205563 00:05:49.644 12:01:50 -- common/autotest_common.sh@936 -- # '[' -z 3205563 ']' 00:05:49.644 12:01:50 -- common/autotest_common.sh@940 -- # kill -0 3205563 00:05:49.644 12:01:50 -- common/autotest_common.sh@941 -- # uname 00:05:49.644 12:01:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:49.644 12:01:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3205563 00:05:49.644 12:01:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:49.644 12:01:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:49.644 12:01:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3205563' 00:05:49.644 killing process with pid 3205563 00:05:49.644 12:01:50 -- common/autotest_common.sh@955 -- # kill 3205563 00:05:49.644 12:01:50 -- common/autotest_common.sh@960 -- # wait 3205563 00:05:49.904 12:01:51 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3205563 00:05:49.904 12:01:51 -- common/autotest_common.sh@638 -- # local es=0 00:05:49.904 12:01:51 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3205563 00:05:49.904 12:01:51 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:49.904 12:01:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:49.904 12:01:51 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:49.904 12:01:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:49.904 12:01:51 -- common/autotest_common.sh@641 -- # waitforlisten 3205563 00:05:49.904 12:01:51 -- common/autotest_common.sh@817 -- # '[' -z 3205563 ']' 00:05:49.904 12:01:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.904 12:01:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:49.904 12:01:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
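The default_locks case above reduces to a single probe: an spdk_tgt started with -m 0x1 should hold a CPU-core lock that lslocks can see (the stray "lslocks: write error" is only lslocks complaining that grep -q closed the pipe after its first match). Stand-alone, the probe looks roughly like this sketch (binary path as in the trace; the plain sleep stands in for the waitforlisten helper and is an assumption for brevity):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &
tgt_pid=$!
sleep 2                                        # the real helper polls the RPC socket instead of sleeping
lslocks -p "$tgt_pid" | grep spdk_cpu_lock     # expect a spdk_cpu_lock entry for core 0
kill "$tgt_pid"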
00:05:49.904 12:01:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:49.904 12:01:51 -- common/autotest_common.sh@10 -- # set +x 00:05:49.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3205563) - No such process 00:05:49.904 ERROR: process (pid: 3205563) is no longer running 00:05:49.904 12:01:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:49.904 12:01:51 -- common/autotest_common.sh@850 -- # return 1 00:05:49.904 12:01:51 -- common/autotest_common.sh@641 -- # es=1 00:05:49.904 12:01:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:49.904 12:01:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:49.904 12:01:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:49.904 12:01:51 -- event/cpu_locks.sh@54 -- # no_locks 00:05:49.904 12:01:51 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.904 12:01:51 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.904 12:01:51 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.904 00:05:49.904 real 0m1.457s 00:05:49.904 user 0m1.545s 00:05:49.904 sys 0m0.488s 00:05:49.904 12:01:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.904 12:01:51 -- common/autotest_common.sh@10 -- # set +x 00:05:49.904 ************************************ 00:05:49.904 END TEST default_locks 00:05:49.904 ************************************ 00:05:49.904 12:01:51 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:49.904 12:01:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.904 12:01:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.904 12:01:51 -- common/autotest_common.sh@10 -- # set +x 00:05:50.164 ************************************ 00:05:50.164 START TEST default_locks_via_rpc 00:05:50.164 ************************************ 00:05:50.164 12:01:51 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:50.164 12:01:51 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3205937 00:05:50.164 12:01:51 -- event/cpu_locks.sh@63 -- # waitforlisten 3205937 00:05:50.164 12:01:51 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.164 12:01:51 -- common/autotest_common.sh@817 -- # '[' -z 3205937 ']' 00:05:50.164 12:01:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.164 12:01:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.164 12:01:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.164 12:01:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.164 12:01:51 -- common/autotest_common.sh@10 -- # set +x 00:05:50.164 [2024-04-26 12:01:51.269382] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:50.164 [2024-04-26 12:01:51.269442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205937 ] 00:05:50.164 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.164 [2024-04-26 12:01:51.333381] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.424 [2024-04-26 12:01:51.405452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.995 12:01:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:50.995 12:01:52 -- common/autotest_common.sh@850 -- # return 0 00:05:50.995 12:01:52 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:50.995 12:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:50.995 12:01:52 -- common/autotest_common.sh@10 -- # set +x 00:05:50.995 12:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:50.995 12:01:52 -- event/cpu_locks.sh@67 -- # no_locks 00:05:50.995 12:01:52 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:50.995 12:01:52 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:50.995 12:01:52 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:50.995 12:01:52 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:50.995 12:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:50.995 12:01:52 -- common/autotest_common.sh@10 -- # set +x 00:05:50.995 12:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:50.995 12:01:52 -- event/cpu_locks.sh@71 -- # locks_exist 3205937 00:05:50.995 12:01:52 -- event/cpu_locks.sh@22 -- # lslocks -p 3205937 00:05:50.995 12:01:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.255 12:01:52 -- event/cpu_locks.sh@73 -- # killprocess 3205937 00:05:51.255 12:01:52 -- common/autotest_common.sh@936 -- # '[' -z 3205937 ']' 00:05:51.255 12:01:52 -- common/autotest_common.sh@940 -- # kill -0 3205937 00:05:51.255 12:01:52 -- common/autotest_common.sh@941 -- # uname 00:05:51.255 12:01:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.255 12:01:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3205937 00:05:51.515 12:01:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.515 12:01:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.516 12:01:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3205937' 00:05:51.516 killing process with pid 3205937 00:05:51.516 12:01:52 -- common/autotest_common.sh@955 -- # kill 3205937 00:05:51.516 12:01:52 -- common/autotest_common.sh@960 -- # wait 3205937 00:05:51.516 00:05:51.516 real 0m1.475s 00:05:51.516 user 0m1.574s 00:05:51.516 sys 0m0.491s 00:05:51.516 12:01:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.516 12:01:52 -- common/autotest_common.sh@10 -- # set +x 00:05:51.516 ************************************ 00:05:51.516 END TEST default_locks_via_rpc 00:05:51.516 ************************************ 00:05:51.516 12:01:52 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:51.516 12:01:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.516 12:01:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.516 12:01:52 -- common/autotest_common.sh@10 -- # set +x 00:05:51.776 ************************************ 00:05:51.776 START TEST non_locking_app_on_locked_coremask 
00:05:51.776 ************************************ 00:05:51.776 12:01:52 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:51.776 12:01:52 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3206310 00:05:51.776 12:01:52 -- event/cpu_locks.sh@81 -- # waitforlisten 3206310 /var/tmp/spdk.sock 00:05:51.776 12:01:52 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.776 12:01:52 -- common/autotest_common.sh@817 -- # '[' -z 3206310 ']' 00:05:51.776 12:01:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.776 12:01:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:51.776 12:01:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.776 12:01:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:51.776 12:01:52 -- common/autotest_common.sh@10 -- # set +x 00:05:51.776 [2024-04-26 12:01:52.938136] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:51.776 [2024-04-26 12:01:52.938196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206310 ] 00:05:51.776 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.036 [2024-04-26 12:01:53.002087] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.036 [2024-04-26 12:01:53.074473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.623 12:01:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:52.623 12:01:53 -- common/autotest_common.sh@850 -- # return 0 00:05:52.623 12:01:53 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3206623 00:05:52.623 12:01:53 -- event/cpu_locks.sh@85 -- # waitforlisten 3206623 /var/tmp/spdk2.sock 00:05:52.623 12:01:53 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:52.623 12:01:53 -- common/autotest_common.sh@817 -- # '[' -z 3206623 ']' 00:05:52.623 12:01:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.623 12:01:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:52.623 12:01:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.623 12:01:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:52.623 12:01:53 -- common/autotest_common.sh@10 -- # set +x 00:05:52.623 [2024-04-26 12:01:53.743458] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:52.623 [2024-04-26 12:01:53.743510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206623 ] 00:05:52.623 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.623 [2024-04-26 12:01:53.829333] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
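The second target above is started with --disable-cpumask-locks so it can come up on a core mask the first instance has already locked; default_locks_via_rpc earlier exercised the runtime equivalent. The two forms side by side as a sketch (paths and sockets as in the trace; the RPC lines assume rpc.py talking to the first instance's default /var/tmp/spdk.sock):

# start-up form: never take the per-core lock files
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

# runtime form: drop or re-acquire the locks of an already running instance
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_disable_cpumask_locks
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_enable_cpumask_locks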
00:05:52.623 [2024-04-26 12:01:53.829361] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.882 [2024-04-26 12:01:53.960001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.452 12:01:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:53.453 12:01:54 -- common/autotest_common.sh@850 -- # return 0 00:05:53.453 12:01:54 -- event/cpu_locks.sh@87 -- # locks_exist 3206310 00:05:53.453 12:01:54 -- event/cpu_locks.sh@22 -- # lslocks -p 3206310 00:05:53.453 12:01:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.024 lslocks: write error 00:05:54.024 12:01:55 -- event/cpu_locks.sh@89 -- # killprocess 3206310 00:05:54.024 12:01:55 -- common/autotest_common.sh@936 -- # '[' -z 3206310 ']' 00:05:54.024 12:01:55 -- common/autotest_common.sh@940 -- # kill -0 3206310 00:05:54.024 12:01:55 -- common/autotest_common.sh@941 -- # uname 00:05:54.024 12:01:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.024 12:01:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3206310 00:05:54.024 12:01:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.024 12:01:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.024 12:01:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3206310' 00:05:54.024 killing process with pid 3206310 00:05:54.024 12:01:55 -- common/autotest_common.sh@955 -- # kill 3206310 00:05:54.024 12:01:55 -- common/autotest_common.sh@960 -- # wait 3206310 00:05:54.595 12:01:55 -- event/cpu_locks.sh@90 -- # killprocess 3206623 00:05:54.595 12:01:55 -- common/autotest_common.sh@936 -- # '[' -z 3206623 ']' 00:05:54.595 12:01:55 -- common/autotest_common.sh@940 -- # kill -0 3206623 00:05:54.595 12:01:55 -- common/autotest_common.sh@941 -- # uname 00:05:54.595 12:01:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.595 12:01:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3206623 00:05:54.595 12:01:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.595 12:01:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.595 12:01:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3206623' 00:05:54.595 killing process with pid 3206623 00:05:54.595 12:01:55 -- common/autotest_common.sh@955 -- # kill 3206623 00:05:54.595 12:01:55 -- common/autotest_common.sh@960 -- # wait 3206623 00:05:54.595 00:05:54.595 real 0m2.898s 00:05:54.595 user 0m3.151s 00:05:54.595 sys 0m0.875s 00:05:54.595 12:01:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.595 12:01:55 -- common/autotest_common.sh@10 -- # set +x 00:05:54.595 ************************************ 00:05:54.595 END TEST non_locking_app_on_locked_coremask 00:05:54.595 ************************************ 00:05:54.595 12:01:55 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:54.595 12:01:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.595 12:01:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.595 12:01:55 -- common/autotest_common.sh@10 -- # set +x 00:05:54.856 ************************************ 00:05:54.856 START TEST locking_app_on_unlocked_coremask 00:05:54.856 ************************************ 00:05:54.856 12:01:55 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:54.856 12:01:55 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3207020 00:05:54.856 12:01:55 -- 
event/cpu_locks.sh@99 -- # waitforlisten 3207020 /var/tmp/spdk.sock 00:05:54.856 12:01:55 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:54.856 12:01:55 -- common/autotest_common.sh@817 -- # '[' -z 3207020 ']' 00:05:54.856 12:01:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.856 12:01:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.856 12:01:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.856 12:01:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.856 12:01:55 -- common/autotest_common.sh@10 -- # set +x 00:05:54.856 [2024-04-26 12:01:56.020371] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:54.856 [2024-04-26 12:01:56.020430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207020 ] 00:05:54.856 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.117 [2024-04-26 12:01:56.081406] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:55.117 [2024-04-26 12:01:56.081435] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.117 [2024-04-26 12:01:56.143214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.690 12:01:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:55.690 12:01:56 -- common/autotest_common.sh@850 -- # return 0 00:05:55.690 12:01:56 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3207208 00:05:55.690 12:01:56 -- event/cpu_locks.sh@103 -- # waitforlisten 3207208 /var/tmp/spdk2.sock 00:05:55.690 12:01:56 -- common/autotest_common.sh@817 -- # '[' -z 3207208 ']' 00:05:55.690 12:01:56 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.690 12:01:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.690 12:01:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:55.690 12:01:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.690 12:01:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:55.690 12:01:56 -- common/autotest_common.sh@10 -- # set +x 00:05:55.690 [2024-04-26 12:01:56.838879] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:55.690 [2024-04-26 12:01:56.838934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207208 ] 00:05:55.690 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.950 [2024-04-26 12:01:56.933027] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.950 [2024-04-26 12:01:57.056371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.522 12:01:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:56.522 12:01:57 -- common/autotest_common.sh@850 -- # return 0 00:05:56.522 12:01:57 -- event/cpu_locks.sh@105 -- # locks_exist 3207208 00:05:56.522 12:01:57 -- event/cpu_locks.sh@22 -- # lslocks -p 3207208 00:05:56.522 12:01:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.783 lslocks: write error 00:05:56.783 12:01:57 -- event/cpu_locks.sh@107 -- # killprocess 3207020 00:05:56.783 12:01:57 -- common/autotest_common.sh@936 -- # '[' -z 3207020 ']' 00:05:56.783 12:01:57 -- common/autotest_common.sh@940 -- # kill -0 3207020 00:05:56.783 12:01:57 -- common/autotest_common.sh@941 -- # uname 00:05:56.783 12:01:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.783 12:01:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3207020 00:05:56.783 12:01:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.783 12:01:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.783 12:01:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3207020' 00:05:56.783 killing process with pid 3207020 00:05:56.783 12:01:57 -- common/autotest_common.sh@955 -- # kill 3207020 00:05:56.783 12:01:57 -- common/autotest_common.sh@960 -- # wait 3207020 00:05:57.356 12:01:58 -- event/cpu_locks.sh@108 -- # killprocess 3207208 00:05:57.356 12:01:58 -- common/autotest_common.sh@936 -- # '[' -z 3207208 ']' 00:05:57.356 12:01:58 -- common/autotest_common.sh@940 -- # kill -0 3207208 00:05:57.356 12:01:58 -- common/autotest_common.sh@941 -- # uname 00:05:57.356 12:01:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.356 12:01:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3207208 00:05:57.356 12:01:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.356 12:01:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.356 12:01:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3207208' 00:05:57.356 killing process with pid 3207208 00:05:57.356 12:01:58 -- common/autotest_common.sh@955 -- # kill 3207208 00:05:57.356 12:01:58 -- common/autotest_common.sh@960 -- # wait 3207208 00:05:57.356 00:05:57.356 real 0m2.614s 00:05:57.356 user 0m2.870s 00:05:57.356 sys 0m0.749s 00:05:57.356 12:01:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.356 12:01:58 -- common/autotest_common.sh@10 -- # set +x 00:05:57.356 ************************************ 00:05:57.356 END TEST locking_app_on_unlocked_coremask 00:05:57.356 ************************************ 00:05:57.616 12:01:58 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:57.616 12:01:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.616 12:01:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.616 12:01:58 -- common/autotest_common.sh@10 -- # set +x 00:05:57.616 
************************************ 00:05:57.616 START TEST locking_app_on_locked_coremask 00:05:57.616 ************************************ 00:05:57.617 12:01:58 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:57.617 12:01:58 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3207734 00:05:57.617 12:01:58 -- event/cpu_locks.sh@116 -- # waitforlisten 3207734 /var/tmp/spdk.sock 00:05:57.617 12:01:58 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.617 12:01:58 -- common/autotest_common.sh@817 -- # '[' -z 3207734 ']' 00:05:57.617 12:01:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.617 12:01:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:57.617 12:01:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.617 12:01:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:57.617 12:01:58 -- common/autotest_common.sh@10 -- # set +x 00:05:57.617 [2024-04-26 12:01:58.810679] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:57.617 [2024-04-26 12:01:58.810725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207734 ] 00:05:57.617 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.877 [2024-04-26 12:01:58.870750] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.877 [2024-04-26 12:01:58.933763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.467 12:01:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.467 12:01:59 -- common/autotest_common.sh@850 -- # return 0 00:05:58.467 12:01:59 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.467 12:01:59 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3207753 00:05:58.467 12:01:59 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3207753 /var/tmp/spdk2.sock 00:05:58.467 12:01:59 -- common/autotest_common.sh@638 -- # local es=0 00:05:58.467 12:01:59 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3207753 /var/tmp/spdk2.sock 00:05:58.467 12:01:59 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:58.467 12:01:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:58.467 12:01:59 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:58.467 12:01:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:58.467 12:01:59 -- common/autotest_common.sh@641 -- # waitforlisten 3207753 /var/tmp/spdk2.sock 00:05:58.467 12:01:59 -- common/autotest_common.sh@817 -- # '[' -z 3207753 ']' 00:05:58.467 12:01:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.467 12:01:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:58.467 12:01:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:58.467 12:01:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:58.467 12:01:59 -- common/autotest_common.sh@10 -- # set +x 00:05:58.467 [2024-04-26 12:01:59.617959] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:58.467 [2024-04-26 12:01:59.618007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207753 ] 00:05:58.467 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.727 [2024-04-26 12:01:59.708390] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3207734 has claimed it. 00:05:58.727 [2024-04-26 12:01:59.708430] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3207753) - No such process 00:05:59.297 ERROR: process (pid: 3207753) is no longer running 00:05:59.297 12:02:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:59.297 12:02:00 -- common/autotest_common.sh@850 -- # return 1 00:05:59.297 12:02:00 -- common/autotest_common.sh@641 -- # es=1 00:05:59.297 12:02:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:59.297 12:02:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:59.297 12:02:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:59.297 12:02:00 -- event/cpu_locks.sh@122 -- # locks_exist 3207734 00:05:59.297 12:02:00 -- event/cpu_locks.sh@22 -- # lslocks -p 3207734 00:05:59.297 12:02:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.557 lslocks: write error 00:05:59.557 12:02:00 -- event/cpu_locks.sh@124 -- # killprocess 3207734 00:05:59.557 12:02:00 -- common/autotest_common.sh@936 -- # '[' -z 3207734 ']' 00:05:59.557 12:02:00 -- common/autotest_common.sh@940 -- # kill -0 3207734 00:05:59.557 12:02:00 -- common/autotest_common.sh@941 -- # uname 00:05:59.557 12:02:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.557 12:02:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3207734 00:05:59.557 12:02:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.557 12:02:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.557 12:02:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3207734' 00:05:59.557 killing process with pid 3207734 00:05:59.557 12:02:00 -- common/autotest_common.sh@955 -- # kill 3207734 00:05:59.557 12:02:00 -- common/autotest_common.sh@960 -- # wait 3207734 00:05:59.817 00:05:59.817 real 0m2.119s 00:05:59.817 user 0m2.358s 00:05:59.817 sys 0m0.564s 00:05:59.817 12:02:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.817 12:02:00 -- common/autotest_common.sh@10 -- # set +x 00:05:59.817 ************************************ 00:05:59.817 END TEST locking_app_on_locked_coremask 00:05:59.817 ************************************ 00:05:59.817 12:02:00 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:59.817 12:02:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.817 12:02:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.817 12:02:00 -- common/autotest_common.sh@10 -- # set +x 00:06:00.077 ************************************ 00:06:00.077 START TEST locking_overlapped_coremask 00:06:00.077 
************************************ 00:06:00.077 12:02:01 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:06:00.077 12:02:01 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3208120 00:06:00.077 12:02:01 -- event/cpu_locks.sh@133 -- # waitforlisten 3208120 /var/tmp/spdk.sock 00:06:00.077 12:02:01 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:00.077 12:02:01 -- common/autotest_common.sh@817 -- # '[' -z 3208120 ']' 00:06:00.077 12:02:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.077 12:02:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:00.077 12:02:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.078 12:02:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:00.078 12:02:01 -- common/autotest_common.sh@10 -- # set +x 00:06:00.078 [2024-04-26 12:02:01.106776] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:00.078 [2024-04-26 12:02:01.106822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208120 ] 00:06:00.078 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.078 [2024-04-26 12:02:01.167685] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.078 [2024-04-26 12:02:01.234351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.078 [2024-04-26 12:02:01.234470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.078 [2024-04-26 12:02:01.234473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.648 12:02:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:00.648 12:02:01 -- common/autotest_common.sh@850 -- # return 0 00:06:00.648 12:02:01 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3208310 00:06:00.648 12:02:01 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3208310 /var/tmp/spdk2.sock 00:06:00.648 12:02:01 -- common/autotest_common.sh@638 -- # local es=0 00:06:00.648 12:02:01 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:00.648 12:02:01 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3208310 /var/tmp/spdk2.sock 00:06:00.648 12:02:01 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:00.648 12:02:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.648 12:02:01 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:00.648 12:02:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.648 12:02:01 -- common/autotest_common.sh@641 -- # waitforlisten 3208310 /var/tmp/spdk2.sock 00:06:00.648 12:02:01 -- common/autotest_common.sh@817 -- # '[' -z 3208310 ']' 00:06:00.648 12:02:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.648 12:02:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:00.648 12:02:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:00.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.648 12:02:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:00.648 12:02:01 -- common/autotest_common.sh@10 -- # set +x 00:06:00.909 [2024-04-26 12:02:01.916612] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:00.909 [2024-04-26 12:02:01.916666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208310 ] 00:06:00.909 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.909 [2024-04-26 12:02:01.985939] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3208120 has claimed it. 00:06:00.909 [2024-04-26 12:02:01.985967] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:01.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3208310) - No such process 00:06:01.479 ERROR: process (pid: 3208310) is no longer running 00:06:01.479 12:02:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:01.479 12:02:02 -- common/autotest_common.sh@850 -- # return 1 00:06:01.479 12:02:02 -- common/autotest_common.sh@641 -- # es=1 00:06:01.479 12:02:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:01.479 12:02:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:01.479 12:02:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:01.479 12:02:02 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:01.479 12:02:02 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.479 12:02:02 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.479 12:02:02 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.479 12:02:02 -- event/cpu_locks.sh@141 -- # killprocess 3208120 00:06:01.479 12:02:02 -- common/autotest_common.sh@936 -- # '[' -z 3208120 ']' 00:06:01.479 12:02:02 -- common/autotest_common.sh@940 -- # kill -0 3208120 00:06:01.479 12:02:02 -- common/autotest_common.sh@941 -- # uname 00:06:01.479 12:02:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.479 12:02:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3208120 00:06:01.479 12:02:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.479 12:02:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.479 12:02:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3208120' 00:06:01.479 killing process with pid 3208120 00:06:01.479 12:02:02 -- common/autotest_common.sh@955 -- # kill 3208120 00:06:01.479 12:02:02 -- common/autotest_common.sh@960 -- # wait 3208120 00:06:01.810 00:06:01.810 real 0m1.727s 00:06:01.810 user 0m4.916s 00:06:01.810 sys 0m0.339s 00:06:01.810 12:02:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.810 12:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:01.810 ************************************ 00:06:01.810 END TEST locking_overlapped_coremask 00:06:01.810 ************************************ 00:06:01.810 12:02:02 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:01.810 12:02:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.810 12:02:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.810 12:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:01.810 ************************************ 00:06:01.810 START TEST locking_overlapped_coremask_via_rpc 00:06:01.810 ************************************ 00:06:01.810 12:02:02 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:06:01.810 12:02:02 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3208502 00:06:01.810 12:02:02 -- event/cpu_locks.sh@149 -- # waitforlisten 3208502 /var/tmp/spdk.sock 00:06:01.810 12:02:02 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:01.810 12:02:02 -- common/autotest_common.sh@817 -- # '[' -z 3208502 ']' 00:06:01.810 12:02:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.810 12:02:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:01.810 12:02:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.810 12:02:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:01.810 12:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:02.074 [2024-04-26 12:02:03.012527] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:02.074 [2024-04-26 12:02:03.012585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208502 ] 00:06:02.074 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.074 [2024-04-26 12:02:03.077048] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:02.074 [2024-04-26 12:02:03.077079] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.074 [2024-04-26 12:02:03.150622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.074 [2024-04-26 12:02:03.150737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.074 [2024-04-26 12:02:03.150740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.644 12:02:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:02.644 12:02:03 -- common/autotest_common.sh@850 -- # return 0 00:06:02.644 12:02:03 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3208836 00:06:02.644 12:02:03 -- event/cpu_locks.sh@153 -- # waitforlisten 3208836 /var/tmp/spdk2.sock 00:06:02.644 12:02:03 -- common/autotest_common.sh@817 -- # '[' -z 3208836 ']' 00:06:02.644 12:02:03 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:02.644 12:02:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.644 12:02:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:02.644 12:02:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:02.644 12:02:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:02.644 12:02:03 -- common/autotest_common.sh@10 -- # set +x 00:06:02.644 [2024-04-26 12:02:03.836704] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:02.644 [2024-04-26 12:02:03.836758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208836 ] 00:06:02.644 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.905 [2024-04-26 12:02:03.907614] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:02.905 [2024-04-26 12:02:03.907634] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.905 [2024-04-26 12:02:04.011930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.905 [2024-04-26 12:02:04.015959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.905 [2024-04-26 12:02:04.015962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:03.476 12:02:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:03.476 12:02:04 -- common/autotest_common.sh@850 -- # return 0 00:06:03.476 12:02:04 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.476 12:02:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:03.476 12:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:03.476 12:02:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:03.476 12:02:04 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.476 12:02:04 -- common/autotest_common.sh@638 -- # local es=0 00:06:03.476 12:02:04 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.476 12:02:04 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:03.476 12:02:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:03.476 12:02:04 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:03.476 12:02:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:03.476 12:02:04 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.476 12:02:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:03.476 12:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:03.476 [2024-04-26 12:02:04.611897] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3208502 has claimed it. 
00:06:03.476 request: 00:06:03.476 { 00:06:03.476 "method": "framework_enable_cpumask_locks", 00:06:03.476 "req_id": 1 00:06:03.476 } 00:06:03.476 Got JSON-RPC error response 00:06:03.476 response: 00:06:03.476 { 00:06:03.476 "code": -32603, 00:06:03.476 "message": "Failed to claim CPU core: 2" 00:06:03.476 } 00:06:03.476 12:02:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:03.476 12:02:04 -- common/autotest_common.sh@641 -- # es=1 00:06:03.476 12:02:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:03.476 12:02:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:03.476 12:02:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:03.476 12:02:04 -- event/cpu_locks.sh@158 -- # waitforlisten 3208502 /var/tmp/spdk.sock 00:06:03.476 12:02:04 -- common/autotest_common.sh@817 -- # '[' -z 3208502 ']' 00:06:03.476 12:02:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.476 12:02:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:03.476 12:02:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.476 12:02:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:03.476 12:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:03.737 12:02:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:03.737 12:02:04 -- common/autotest_common.sh@850 -- # return 0 00:06:03.737 12:02:04 -- event/cpu_locks.sh@159 -- # waitforlisten 3208836 /var/tmp/spdk2.sock 00:06:03.737 12:02:04 -- common/autotest_common.sh@817 -- # '[' -z 3208836 ']' 00:06:03.737 12:02:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.737 12:02:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:03.737 12:02:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:03.737 12:02:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:03.737 12:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:03.737 12:02:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:03.737 12:02:04 -- common/autotest_common.sh@850 -- # return 0 00:06:03.737 12:02:04 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:03.737 12:02:04 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:03.737 12:02:04 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:03.737 12:02:04 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:03.737 00:06:03.737 real 0m1.998s 00:06:03.737 user 0m0.770s 00:06:03.737 sys 0m0.163s 00:06:03.737 12:02:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:03.737 12:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:03.737 ************************************ 00:06:03.737 END TEST locking_overlapped_coremask_via_rpc 00:06:03.737 ************************************ 00:06:03.997 12:02:04 -- event/cpu_locks.sh@174 -- # cleanup 00:06:03.997 12:02:04 -- event/cpu_locks.sh@15 -- # [[ -z 3208502 ]] 00:06:03.997 12:02:04 -- event/cpu_locks.sh@15 -- # killprocess 3208502 00:06:03.997 12:02:04 -- common/autotest_common.sh@936 -- # '[' -z 3208502 ']' 00:06:03.997 12:02:04 -- common/autotest_common.sh@940 -- # kill -0 3208502 00:06:03.997 12:02:04 -- common/autotest_common.sh@941 -- # uname 00:06:03.997 12:02:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:03.997 12:02:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3208502 00:06:03.997 12:02:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:03.997 12:02:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:03.997 12:02:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3208502' 00:06:03.997 killing process with pid 3208502 00:06:03.997 12:02:05 -- common/autotest_common.sh@955 -- # kill 3208502 00:06:03.997 12:02:05 -- common/autotest_common.sh@960 -- # wait 3208502 00:06:04.258 12:02:05 -- event/cpu_locks.sh@16 -- # [[ -z 3208836 ]] 00:06:04.258 12:02:05 -- event/cpu_locks.sh@16 -- # killprocess 3208836 00:06:04.258 12:02:05 -- common/autotest_common.sh@936 -- # '[' -z 3208836 ']' 00:06:04.258 12:02:05 -- common/autotest_common.sh@940 -- # kill -0 3208836 00:06:04.258 12:02:05 -- common/autotest_common.sh@941 -- # uname 00:06:04.258 12:02:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.258 12:02:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3208836 00:06:04.258 12:02:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:04.258 12:02:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:04.258 12:02:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3208836' 00:06:04.258 killing process with pid 3208836 00:06:04.258 12:02:05 -- common/autotest_common.sh@955 -- # kill 3208836 00:06:04.258 12:02:05 -- common/autotest_common.sh@960 -- # wait 3208836 00:06:04.519 12:02:05 -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.519 12:02:05 -- event/cpu_locks.sh@1 -- # cleanup 00:06:04.519 12:02:05 -- event/cpu_locks.sh@15 -- # [[ -z 3208502 ]] 00:06:04.519 12:02:05 -- event/cpu_locks.sh@15 -- # killprocess 3208502 
00:06:04.519 12:02:05 -- common/autotest_common.sh@936 -- # '[' -z 3208502 ']' 00:06:04.519 12:02:05 -- common/autotest_common.sh@940 -- # kill -0 3208502 00:06:04.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3208502) - No such process 00:06:04.519 12:02:05 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3208502 is not found' 00:06:04.519 Process with pid 3208502 is not found 00:06:04.519 12:02:05 -- event/cpu_locks.sh@16 -- # [[ -z 3208836 ]] 00:06:04.519 12:02:05 -- event/cpu_locks.sh@16 -- # killprocess 3208836 00:06:04.519 12:02:05 -- common/autotest_common.sh@936 -- # '[' -z 3208836 ']' 00:06:04.519 12:02:05 -- common/autotest_common.sh@940 -- # kill -0 3208836 00:06:04.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3208836) - No such process 00:06:04.519 12:02:05 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3208836 is not found' 00:06:04.519 Process with pid 3208836 is not found 00:06:04.519 12:02:05 -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.519 00:06:04.519 real 0m16.183s 00:06:04.519 user 0m27.010s 00:06:04.519 sys 0m4.853s 00:06:04.519 12:02:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.519 12:02:05 -- common/autotest_common.sh@10 -- # set +x 00:06:04.519 ************************************ 00:06:04.519 END TEST cpu_locks 00:06:04.519 ************************************ 00:06:04.519 00:06:04.519 real 0m43.634s 00:06:04.519 user 1m21.032s 00:06:04.519 sys 0m8.283s 00:06:04.519 12:02:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.519 12:02:05 -- common/autotest_common.sh@10 -- # set +x 00:06:04.519 ************************************ 00:06:04.519 END TEST event 00:06:04.519 ************************************ 00:06:04.519 12:02:05 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:04.519 12:02:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.519 12:02:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.519 12:02:05 -- common/autotest_common.sh@10 -- # set +x 00:06:04.519 ************************************ 00:06:04.519 START TEST thread 00:06:04.519 ************************************ 00:06:04.519 12:02:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:04.780 * Looking for test storage... 00:06:04.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:04.780 12:02:05 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.780 12:02:05 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:04.780 12:02:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.780 12:02:05 -- common/autotest_common.sh@10 -- # set +x 00:06:04.780 ************************************ 00:06:04.780 START TEST thread_poller_perf 00:06:04.780 ************************************ 00:06:04.780 12:02:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.040 [2024-04-26 12:02:06.021854] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:05.040 [2024-04-26 12:02:06.021952] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209294 ] 00:06:05.040 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.040 [2024-04-26 12:02:06.086287] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.040 [2024-04-26 12:02:06.151423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.040 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:06.422 ====================================== 00:06:06.422 busy:2409380952 (cyc) 00:06:06.422 total_run_count: 287000 00:06:06.422 tsc_hz: 2400000000 (cyc) 00:06:06.422 ====================================== 00:06:06.422 poller_cost: 8395 (cyc), 3497 (nsec) 00:06:06.422 00:06:06.422 real 0m1.211s 00:06:06.422 user 0m1.139s 00:06:06.422 sys 0m0.067s 00:06:06.422 12:02:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.422 12:02:07 -- common/autotest_common.sh@10 -- # set +x 00:06:06.422 ************************************ 00:06:06.422 END TEST thread_poller_perf 00:06:06.422 ************************************ 00:06:06.422 12:02:07 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.422 12:02:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:06.422 12:02:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.422 12:02:07 -- common/autotest_common.sh@10 -- # set +x 00:06:06.422 ************************************ 00:06:06.422 START TEST thread_poller_perf 00:06:06.422 ************************************ 00:06:06.422 12:02:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.422 [2024-04-26 12:02:07.430893] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:06.423 [2024-04-26 12:02:07.431001] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209648 ] 00:06:06.423 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.423 [2024-04-26 12:02:07.499362] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.423 [2024-04-26 12:02:07.571342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.423 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:07.862 ====================================== 00:06:07.862 busy:2402245888 (cyc) 00:06:07.862 total_run_count: 3815000 00:06:07.862 tsc_hz: 2400000000 (cyc) 00:06:07.862 ====================================== 00:06:07.862 poller_cost: 629 (cyc), 262 (nsec) 00:06:07.862 00:06:07.862 real 0m1.216s 00:06:07.862 user 0m1.139s 00:06:07.862 sys 0m0.073s 00:06:07.862 12:02:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.862 12:02:08 -- common/autotest_common.sh@10 -- # set +x 00:06:07.862 ************************************ 00:06:07.862 END TEST thread_poller_perf 00:06:07.862 ************************************ 00:06:07.862 12:02:08 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:07.862 00:06:07.862 real 0m2.930s 00:06:07.862 user 0m2.478s 00:06:07.862 sys 0m0.418s 00:06:07.862 12:02:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.862 12:02:08 -- common/autotest_common.sh@10 -- # set +x 00:06:07.862 ************************************ 00:06:07.862 END TEST thread 00:06:07.862 ************************************ 00:06:07.862 12:02:08 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:07.862 12:02:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.862 12:02:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.862 12:02:08 -- common/autotest_common.sh@10 -- # set +x 00:06:07.862 ************************************ 00:06:07.862 START TEST accel 00:06:07.862 ************************************ 00:06:07.862 12:02:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:07.862 * Looking for test storage... 00:06:07.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:07.862 12:02:08 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:07.862 12:02:08 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:07.862 12:02:08 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:07.862 12:02:08 -- accel/accel.sh@62 -- # spdk_tgt_pid=3210052 00:06:07.862 12:02:08 -- accel/accel.sh@63 -- # waitforlisten 3210052 00:06:07.862 12:02:08 -- common/autotest_common.sh@817 -- # '[' -z 3210052 ']' 00:06:07.862 12:02:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.862 12:02:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:07.862 12:02:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.862 12:02:08 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:07.862 12:02:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:07.862 12:02:08 -- accel/accel.sh@61 -- # build_accel_config 00:06:07.862 12:02:08 -- common/autotest_common.sh@10 -- # set +x 00:06:07.862 12:02:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.862 12:02:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.862 12:02:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.862 12:02:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.862 12:02:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.862 12:02:08 -- accel/accel.sh@40 -- # local IFS=, 00:06:07.862 12:02:08 -- accel/accel.sh@41 -- # jq -r . 
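The two thread_poller_perf summaries above report busy TSC cycles, a total_run_count and a derived poller_cost. A minimal sketch of that arithmetic, assuming poller_cost is simply the busy cycles divided by the run count and the nanosecond figure is converted with the reported tsc_hz (the constants below are copied from the 1-microsecond-period run above; the script itself is illustrative and not part of the test suite):

#!/usr/bin/env bash
# Reproduce the poller_cost line from the first thread_poller_perf summary.
busy=2409380952         # busy (cyc) reported by poller_perf
total_run_count=287000  # pollers fired during the 1 second window
tsc_hz=2400000000       # 2.4 GHz timestamp counter

cost_cyc=$(( busy / total_run_count ))           # 8395 cyc per poller
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))  # 3497 nsec per poller

echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The same arithmetic applied to the zero-period run (2402245888 cyc over 3815000 iterations) gives the 629 (cyc) / 262 (nsec) figures printed above.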
00:06:07.862 [2024-04-26 12:02:09.026608] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:07.862 [2024-04-26 12:02:09.026668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210052 ] 00:06:07.862 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.122 [2024-04-26 12:02:09.092173] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.122 [2024-04-26 12:02:09.165178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.690 12:02:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:08.690 12:02:09 -- common/autotest_common.sh@850 -- # return 0 00:06:08.690 12:02:09 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:08.690 12:02:09 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:08.690 12:02:09 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:08.690 12:02:09 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:08.690 12:02:09 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:08.690 12:02:09 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:08.690 12:02:09 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:08.690 12:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.690 12:02:09 -- common/autotest_common.sh@10 -- # set +x 00:06:08.690 12:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:08.690 12:02:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # IFS== 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.690 12:02:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.690 12:02:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # IFS== 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.690 12:02:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.690 12:02:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # IFS== 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.690 12:02:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.690 12:02:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # IFS== 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.690 12:02:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.690 12:02:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # IFS== 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.690 12:02:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.690 12:02:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # IFS== 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.690 12:02:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.690 12:02:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # IFS== 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.690 12:02:09 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:08.690 12:02:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # IFS== 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.690 12:02:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.690 12:02:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # IFS== 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.690 12:02:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.690 12:02:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # IFS== 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.690 12:02:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.690 12:02:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # IFS== 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.690 12:02:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.690 12:02:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # IFS== 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.690 12:02:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.690 12:02:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # IFS== 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.690 12:02:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.690 12:02:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # IFS== 00:06:08.690 12:02:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.690 12:02:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.690 12:02:09 -- accel/accel.sh@75 -- # killprocess 3210052 00:06:08.690 12:02:09 -- common/autotest_common.sh@936 -- # '[' -z 3210052 ']' 00:06:08.690 12:02:09 -- common/autotest_common.sh@940 -- # kill -0 3210052 00:06:08.690 12:02:09 -- common/autotest_common.sh@941 -- # uname 00:06:08.690 12:02:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.690 12:02:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3210052 00:06:08.690 12:02:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.690 12:02:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.690 12:02:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3210052' 00:06:08.690 killing process with pid 3210052 00:06:08.690 12:02:09 -- common/autotest_common.sh@955 -- # kill 3210052 00:06:08.690 12:02:09 -- common/autotest_common.sh@960 -- # wait 3210052 00:06:08.951 12:02:10 -- accel/accel.sh@76 -- # trap - ERR 00:06:08.951 12:02:10 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:08.951 12:02:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:08.951 12:02:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.951 12:02:10 -- common/autotest_common.sh@10 -- # set +x 00:06:09.212 12:02:10 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:09.212 12:02:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:09.212 12:02:10 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:09.212 12:02:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.212 12:02:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.212 12:02:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.212 12:02:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.212 12:02:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.212 12:02:10 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.212 12:02:10 -- accel/accel.sh@41 -- # jq -r . 00:06:09.212 12:02:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.212 12:02:10 -- common/autotest_common.sh@10 -- # set +x 00:06:09.212 12:02:10 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:09.212 12:02:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:09.212 12:02:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.212 12:02:10 -- common/autotest_common.sh@10 -- # set +x 00:06:09.474 ************************************ 00:06:09.474 START TEST accel_missing_filename 00:06:09.474 ************************************ 00:06:09.474 12:02:10 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:09.474 12:02:10 -- common/autotest_common.sh@638 -- # local es=0 00:06:09.474 12:02:10 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:09.474 12:02:10 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:09.474 12:02:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:09.474 12:02:10 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:09.474 12:02:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:09.474 12:02:10 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:09.474 12:02:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:09.474 12:02:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.474 12:02:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.474 12:02:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.474 12:02:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.474 12:02:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.474 12:02:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.474 12:02:10 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.474 12:02:10 -- accel/accel.sh@41 -- # jq -r . 00:06:09.474 [2024-04-26 12:02:10.496911] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:09.474 [2024-04-26 12:02:10.496988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210384 ] 00:06:09.474 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.474 [2024-04-26 12:02:10.564787] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.474 [2024-04-26 12:02:10.637432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.474 [2024-04-26 12:02:10.669930] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.734 [2024-04-26 12:02:10.707366] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:09.734 A filename is required. 
00:06:09.734 12:02:10 -- common/autotest_common.sh@641 -- # es=234 00:06:09.734 12:02:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:09.734 12:02:10 -- common/autotest_common.sh@650 -- # es=106 00:06:09.734 12:02:10 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:09.734 12:02:10 -- common/autotest_common.sh@658 -- # es=1 00:06:09.734 12:02:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:09.734 00:06:09.734 real 0m0.295s 00:06:09.734 user 0m0.229s 00:06:09.734 sys 0m0.106s 00:06:09.734 12:02:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.734 12:02:10 -- common/autotest_common.sh@10 -- # set +x 00:06:09.734 ************************************ 00:06:09.734 END TEST accel_missing_filename 00:06:09.734 ************************************ 00:06:09.734 12:02:10 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.734 12:02:10 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:09.734 12:02:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.734 12:02:10 -- common/autotest_common.sh@10 -- # set +x 00:06:09.734 ************************************ 00:06:09.734 START TEST accel_compress_verify 00:06:09.734 ************************************ 00:06:09.734 12:02:10 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.734 12:02:10 -- common/autotest_common.sh@638 -- # local es=0 00:06:09.734 12:02:10 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.734 12:02:10 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:09.734 12:02:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:09.734 12:02:10 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:09.734 12:02:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:09.734 12:02:10 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.734 12:02:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.734 12:02:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.734 12:02:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.734 12:02:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.734 12:02:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.734 12:02:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.734 12:02:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.734 12:02:10 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.734 12:02:10 -- accel/accel.sh@41 -- # jq -r . 00:06:09.994 [2024-04-26 12:02:10.972166] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:09.994 [2024-04-26 12:02:10.972241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210468 ] 00:06:09.994 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.994 [2024-04-26 12:02:11.037443] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.994 [2024-04-26 12:02:11.108242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.994 [2024-04-26 12:02:11.140584] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.994 [2024-04-26 12:02:11.177834] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:10.254 00:06:10.254 Compression does not support the verify option, aborting. 00:06:10.254 12:02:11 -- common/autotest_common.sh@641 -- # es=161 00:06:10.254 12:02:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:10.254 12:02:11 -- common/autotest_common.sh@650 -- # es=33 00:06:10.254 12:02:11 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:10.254 12:02:11 -- common/autotest_common.sh@658 -- # es=1 00:06:10.254 12:02:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:10.254 00:06:10.254 real 0m0.288s 00:06:10.254 user 0m0.225s 00:06:10.254 sys 0m0.104s 00:06:10.254 12:02:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.254 12:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:10.254 ************************************ 00:06:10.254 END TEST accel_compress_verify 00:06:10.254 ************************************ 00:06:10.254 12:02:11 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:10.254 12:02:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:10.254 12:02:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.254 12:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:10.254 ************************************ 00:06:10.254 START TEST accel_wrong_workload 00:06:10.254 ************************************ 00:06:10.254 12:02:11 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:10.254 12:02:11 -- common/autotest_common.sh@638 -- # local es=0 00:06:10.254 12:02:11 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:10.254 12:02:11 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:10.254 12:02:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:10.254 12:02:11 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:10.254 12:02:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:10.254 12:02:11 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:10.254 12:02:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:10.254 12:02:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.254 12:02:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.254 12:02:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.254 12:02:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.254 12:02:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.254 12:02:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.254 12:02:11 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.254 12:02:11 -- accel/accel.sh@41 -- # jq -r . 
00:06:10.254 Unsupported workload type: foobar 00:06:10.254 [2024-04-26 12:02:11.444078] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:10.254 accel_perf options: 00:06:10.254 [-h help message] 00:06:10.254 [-q queue depth per core] 00:06:10.254 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:10.254 [-T number of threads per core 00:06:10.254 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:10.254 [-t time in seconds] 00:06:10.254 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:10.254 [ dif_verify, , dif_generate, dif_generate_copy 00:06:10.254 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:10.254 [-l for compress/decompress workloads, name of uncompressed input file 00:06:10.254 [-S for crc32c workload, use this seed value (default 0) 00:06:10.254 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:10.254 [-f for fill workload, use this BYTE value (default 255) 00:06:10.254 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:10.254 [-y verify result if this switch is on] 00:06:10.254 [-a tasks to allocate per core (default: same value as -q)] 00:06:10.254 Can be used to spread operations across a wider range of memory. 00:06:10.254 12:02:11 -- common/autotest_common.sh@641 -- # es=1 00:06:10.254 12:02:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:10.254 12:02:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:10.254 12:02:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:10.254 00:06:10.254 real 0m0.036s 00:06:10.254 user 0m0.023s 00:06:10.254 sys 0m0.012s 00:06:10.254 12:02:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.254 12:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:10.254 ************************************ 00:06:10.254 END TEST accel_wrong_workload 00:06:10.254 ************************************ 00:06:10.254 Error: writing output failed: Broken pipe 00:06:10.515 12:02:11 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:10.515 12:02:11 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:10.515 12:02:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.515 12:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:10.515 ************************************ 00:06:10.515 START TEST accel_negative_buffers 00:06:10.515 ************************************ 00:06:10.515 12:02:11 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:10.515 12:02:11 -- common/autotest_common.sh@638 -- # local es=0 00:06:10.515 12:02:11 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:10.515 12:02:11 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:10.515 12:02:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:10.515 12:02:11 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:10.515 12:02:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:10.515 12:02:11 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:10.515 12:02:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:10.515 12:02:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.515 12:02:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.515 12:02:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.515 12:02:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.515 12:02:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.515 12:02:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.515 12:02:11 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.515 12:02:11 -- accel/accel.sh@41 -- # jq -r . 00:06:10.515 -x option must be non-negative. 00:06:10.515 [2024-04-26 12:02:11.673115] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:10.515 accel_perf options: 00:06:10.515 [-h help message] 00:06:10.515 [-q queue depth per core] 00:06:10.515 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:10.515 [-T number of threads per core 00:06:10.515 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:10.515 [-t time in seconds] 00:06:10.515 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:10.515 [ dif_verify, , dif_generate, dif_generate_copy 00:06:10.515 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:10.515 [-l for compress/decompress workloads, name of uncompressed input file 00:06:10.515 [-S for crc32c workload, use this seed value (default 0) 00:06:10.515 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:10.515 [-f for fill workload, use this BYTE value (default 255) 00:06:10.515 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:10.515 [-y verify result if this switch is on] 00:06:10.515 [-a tasks to allocate per core (default: same value as -q)] 00:06:10.515 Can be used to spread operations across a wider range of memory. 
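The two option dumps above are accel_perf rejecting an unknown workload ('foobar') and a negative '-x' value; in both cases the test only checks that the tool exits non-zero. Using only flags listed in that usage text, a representative stand-alone invocation (illustrative, not taken verbatim from this run) would be:

    # run the software crc32c workload for 1 second, queue depth 64,
    # 4 KiB transfers, seed 32, and verify the results (-y)
    ./build/examples/accel_perf -t 1 -w crc32c -q 64 -o 4096 -S 32 -y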
00:06:10.515 12:02:11 -- common/autotest_common.sh@641 -- # es=1 00:06:10.515 12:02:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:10.515 12:02:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:10.515 12:02:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:10.515 00:06:10.515 real 0m0.035s 00:06:10.515 user 0m0.018s 00:06:10.515 sys 0m0.017s 00:06:10.515 12:02:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.515 12:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:10.515 ************************************ 00:06:10.515 END TEST accel_negative_buffers 00:06:10.515 ************************************ 00:06:10.515 Error: writing output failed: Broken pipe 00:06:10.515 12:02:11 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:10.515 12:02:11 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:10.515 12:02:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.515 12:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:10.776 ************************************ 00:06:10.776 START TEST accel_crc32c 00:06:10.776 ************************************ 00:06:10.776 12:02:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:10.776 12:02:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.776 12:02:11 -- accel/accel.sh@17 -- # local accel_module 00:06:10.776 12:02:11 -- accel/accel.sh@19 -- # IFS=: 00:06:10.776 12:02:11 -- accel/accel.sh@19 -- # read -r var val 00:06:10.776 12:02:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:10.776 12:02:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:10.776 12:02:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.776 12:02:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.776 12:02:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.776 12:02:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.776 12:02:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.776 12:02:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.776 12:02:11 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.776 12:02:11 -- accel/accel.sh@41 -- # jq -r . 00:06:10.776 [2024-04-26 12:02:11.888467] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:10.776 [2024-04-26 12:02:11.888548] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210667 ] 00:06:10.776 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.776 [2024-04-26 12:02:11.954200] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.036 [2024-04-26 12:02:12.026689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val= 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val= 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val=0x1 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val= 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val= 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val=crc32c 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val=32 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val= 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val=software 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@22 -- # accel_module=software 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val=32 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val=32 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- 
accel/accel.sh@20 -- # val=1 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val=Yes 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val= 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.036 12:02:12 -- accel/accel.sh@20 -- # val= 00:06:11.036 12:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # IFS=: 00:06:11.036 12:02:12 -- accel/accel.sh@19 -- # read -r var val 00:06:11.975 12:02:13 -- accel/accel.sh@20 -- # val= 00:06:11.975 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.976 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:11.976 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:11.976 12:02:13 -- accel/accel.sh@20 -- # val= 00:06:11.976 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.976 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:11.976 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:11.976 12:02:13 -- accel/accel.sh@20 -- # val= 00:06:11.976 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.976 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:11.976 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:11.976 12:02:13 -- accel/accel.sh@20 -- # val= 00:06:11.976 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.976 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:11.976 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:11.976 12:02:13 -- accel/accel.sh@20 -- # val= 00:06:11.976 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.976 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:11.976 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:11.976 12:02:13 -- accel/accel.sh@20 -- # val= 00:06:11.976 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.976 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:11.976 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:11.976 12:02:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.976 12:02:13 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:11.976 12:02:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.976 00:06:11.976 real 0m1.296s 00:06:11.976 user 0m1.194s 00:06:11.976 sys 0m0.113s 00:06:11.976 12:02:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.976 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:11.976 ************************************ 00:06:11.976 END TEST accel_crc32c 00:06:11.976 ************************************ 00:06:11.976 12:02:13 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:11.976 12:02:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:11.976 12:02:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.976 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:12.236 ************************************ 00:06:12.236 START TEST 
accel_crc32c_C2 00:06:12.236 ************************************ 00:06:12.236 12:02:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:12.236 12:02:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.236 12:02:13 -- accel/accel.sh@17 -- # local accel_module 00:06:12.236 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.236 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.236 12:02:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:12.236 12:02:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:12.236 12:02:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.236 12:02:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.236 12:02:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.236 12:02:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.236 12:02:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.236 12:02:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.236 12:02:13 -- accel/accel.sh@40 -- # local IFS=, 00:06:12.236 12:02:13 -- accel/accel.sh@41 -- # jq -r . 00:06:12.236 [2024-04-26 12:02:13.361329] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:12.236 [2024-04-26 12:02:13.361451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210925 ] 00:06:12.236 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.236 [2024-04-26 12:02:13.436944] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.497 [2024-04-26 12:02:13.509104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.497 12:02:13 -- accel/accel.sh@20 -- # val= 00:06:12.497 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.497 12:02:13 -- accel/accel.sh@20 -- # val= 00:06:12.497 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.497 12:02:13 -- accel/accel.sh@20 -- # val=0x1 00:06:12.497 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.497 12:02:13 -- accel/accel.sh@20 -- # val= 00:06:12.497 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.497 12:02:13 -- accel/accel.sh@20 -- # val= 00:06:12.497 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.497 12:02:13 -- accel/accel.sh@20 -- # val=crc32c 00:06:12.497 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.497 12:02:13 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.497 12:02:13 -- accel/accel.sh@20 -- # val=0 00:06:12.497 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.497 12:02:13 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.497 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.497 12:02:13 -- accel/accel.sh@20 -- # val= 00:06:12.497 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.497 12:02:13 -- accel/accel.sh@20 -- # val=software 00:06:12.497 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.497 12:02:13 -- accel/accel.sh@22 -- # accel_module=software 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.497 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.497 12:02:13 -- accel/accel.sh@20 -- # val=32 00:06:12.497 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.498 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.498 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.498 12:02:13 -- accel/accel.sh@20 -- # val=32 00:06:12.498 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.498 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.498 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.498 12:02:13 -- accel/accel.sh@20 -- # val=1 00:06:12.498 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.498 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.498 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.498 12:02:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.498 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.498 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.498 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.498 12:02:13 -- accel/accel.sh@20 -- # val=Yes 00:06:12.498 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.498 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.498 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.498 12:02:13 -- accel/accel.sh@20 -- # val= 00:06:12.498 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.498 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.498 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:12.498 12:02:13 -- accel/accel.sh@20 -- # val= 00:06:12.498 12:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.498 12:02:13 -- accel/accel.sh@19 -- # IFS=: 00:06:12.498 12:02:13 -- accel/accel.sh@19 -- # read -r var val 00:06:13.441 12:02:14 -- accel/accel.sh@20 -- # val= 00:06:13.441 12:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.441 12:02:14 -- accel/accel.sh@19 -- # IFS=: 00:06:13.441 12:02:14 -- accel/accel.sh@19 -- # read -r var val 00:06:13.441 12:02:14 -- accel/accel.sh@20 -- # val= 00:06:13.441 12:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.441 12:02:14 -- accel/accel.sh@19 -- # IFS=: 00:06:13.441 12:02:14 -- accel/accel.sh@19 -- # read -r var val 00:06:13.441 12:02:14 -- accel/accel.sh@20 -- # val= 00:06:13.441 12:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.441 12:02:14 -- accel/accel.sh@19 -- # IFS=: 00:06:13.441 12:02:14 -- accel/accel.sh@19 -- # read -r var val 00:06:13.441 12:02:14 -- accel/accel.sh@20 -- # val= 00:06:13.441 12:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.441 12:02:14 -- accel/accel.sh@19 -- # IFS=: 00:06:13.441 12:02:14 -- accel/accel.sh@19 -- # read -r var val 00:06:13.441 12:02:14 -- accel/accel.sh@20 -- # val= 00:06:13.441 12:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.441 12:02:14 -- accel/accel.sh@19 -- # IFS=: 00:06:13.441 12:02:14 -- 
accel/accel.sh@19 -- # read -r var val 00:06:13.441 12:02:14 -- accel/accel.sh@20 -- # val= 00:06:13.441 12:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.441 12:02:14 -- accel/accel.sh@19 -- # IFS=: 00:06:13.441 12:02:14 -- accel/accel.sh@19 -- # read -r var val 00:06:13.441 12:02:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.441 12:02:14 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:13.441 12:02:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.441 00:06:13.441 real 0m1.307s 00:06:13.441 user 0m1.202s 00:06:13.441 sys 0m0.115s 00:06:13.441 12:02:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.441 12:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:13.441 ************************************ 00:06:13.441 END TEST accel_crc32c_C2 00:06:13.441 ************************************ 00:06:13.703 12:02:14 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:13.703 12:02:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:13.703 12:02:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.703 12:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:13.703 ************************************ 00:06:13.703 START TEST accel_copy 00:06:13.703 ************************************ 00:06:13.703 12:02:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:13.703 12:02:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.703 12:02:14 -- accel/accel.sh@17 -- # local accel_module 00:06:13.703 12:02:14 -- accel/accel.sh@19 -- # IFS=: 00:06:13.703 12:02:14 -- accel/accel.sh@19 -- # read -r var val 00:06:13.703 12:02:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:13.703 12:02:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:13.703 12:02:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.703 12:02:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.703 12:02:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.703 12:02:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.703 12:02:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.703 12:02:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.703 12:02:14 -- accel/accel.sh@40 -- # local IFS=, 00:06:13.703 12:02:14 -- accel/accel.sh@41 -- # jq -r . 00:06:13.703 [2024-04-26 12:02:14.854557] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:13.703 [2024-04-26 12:02:14.854656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211282 ] 00:06:13.703 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.703 [2024-04-26 12:02:14.921398] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.964 [2024-04-26 12:02:14.993112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val= 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val= 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val=0x1 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val= 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val= 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val=copy 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val= 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val=software 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@22 -- # accel_module=software 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val=32 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val=32 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val=1 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val=Yes 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val= 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:13.964 12:02:15 -- accel/accel.sh@20 -- # val= 00:06:13.964 12:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # IFS=: 00:06:13.964 12:02:15 -- accel/accel.sh@19 -- # read -r var val 00:06:14.907 12:02:16 -- accel/accel.sh@20 -- # val= 00:06:14.907 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.907 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:14.907 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:14.907 12:02:16 -- accel/accel.sh@20 -- # val= 00:06:14.907 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.907 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:14.907 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:14.907 12:02:16 -- accel/accel.sh@20 -- # val= 00:06:14.907 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.907 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:14.907 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:14.907 12:02:16 -- accel/accel.sh@20 -- # val= 00:06:14.907 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.907 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:14.907 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:14.907 12:02:16 -- accel/accel.sh@20 -- # val= 00:06:14.907 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.907 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:14.907 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:14.907 12:02:16 -- accel/accel.sh@20 -- # val= 00:06:14.907 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.907 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:14.907 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:14.907 12:02:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.907 12:02:16 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:14.907 12:02:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.907 00:06:14.907 real 0m1.299s 00:06:14.907 user 0m1.199s 00:06:14.907 sys 0m0.109s 00:06:14.907 12:02:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.907 12:02:16 -- common/autotest_common.sh@10 -- # set +x 00:06:14.907 ************************************ 00:06:14.907 END TEST accel_copy 00:06:14.907 ************************************ 00:06:15.167 12:02:16 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.167 12:02:16 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:15.167 12:02:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.167 12:02:16 -- common/autotest_common.sh@10 -- # set +x 00:06:15.167 ************************************ 00:06:15.167 START TEST accel_fill 00:06:15.167 ************************************ 00:06:15.167 12:02:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.167 12:02:16 -- accel/accel.sh@16 -- # local accel_opc 
00:06:15.167 12:02:16 -- accel/accel.sh@17 -- # local accel_module 00:06:15.167 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.167 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.167 12:02:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.167 12:02:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.167 12:02:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.167 12:02:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.167 12:02:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.167 12:02:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.167 12:02:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.167 12:02:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.168 12:02:16 -- accel/accel.sh@40 -- # local IFS=, 00:06:15.168 12:02:16 -- accel/accel.sh@41 -- # jq -r . 00:06:15.168 [2024-04-26 12:02:16.344693] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:15.168 [2024-04-26 12:02:16.344756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211635 ] 00:06:15.168 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.428 [2024-04-26 12:02:16.407205] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.428 [2024-04-26 12:02:16.470387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val= 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val= 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val=0x1 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val= 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val= 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val=fill 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val=0x80 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 
-- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val= 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val=software 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@22 -- # accel_module=software 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val=64 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val=64 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val=1 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val=Yes 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val= 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 12:02:16 -- accel/accel.sh@20 -- # val= 00:06:15.428 12:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 12:02:16 -- accel/accel.sh@19 -- # read -r var val 00:06:16.814 12:02:17 -- accel/accel.sh@20 -- # val= 00:06:16.814 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.814 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.814 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.814 12:02:17 -- accel/accel.sh@20 -- # val= 00:06:16.814 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.814 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.814 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.814 12:02:17 -- accel/accel.sh@20 -- # val= 00:06:16.814 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.814 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.814 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.814 12:02:17 -- accel/accel.sh@20 -- # val= 00:06:16.814 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.814 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.814 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.814 12:02:17 -- accel/accel.sh@20 -- # val= 00:06:16.814 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.814 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.814 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.814 12:02:17 -- accel/accel.sh@20 -- # val= 00:06:16.814 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.814 12:02:17 -- accel/accel.sh@19 
-- # IFS=: 00:06:16.814 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.814 12:02:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.814 12:02:17 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:16.814 12:02:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.814 00:06:16.814 real 0m1.283s 00:06:16.814 user 0m1.197s 00:06:16.814 sys 0m0.096s 00:06:16.814 12:02:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.814 12:02:17 -- common/autotest_common.sh@10 -- # set +x 00:06:16.814 ************************************ 00:06:16.814 END TEST accel_fill 00:06:16.814 ************************************ 00:06:16.814 12:02:17 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:16.814 12:02:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:16.814 12:02:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.814 12:02:17 -- common/autotest_common.sh@10 -- # set +x 00:06:16.814 ************************************ 00:06:16.814 START TEST accel_copy_crc32c 00:06:16.814 ************************************ 00:06:16.814 12:02:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:16.814 12:02:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.814 12:02:17 -- accel/accel.sh@17 -- # local accel_module 00:06:16.814 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.814 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:16.815 12:02:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:16.815 12:02:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.815 12:02:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.815 12:02:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.815 12:02:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.815 12:02:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.815 12:02:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.815 12:02:17 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.815 12:02:17 -- accel/accel.sh@41 -- # jq -r . 00:06:16.815 [2024-04-26 12:02:17.812280] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:16.815 [2024-04-26 12:02:17.812352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211997 ] 00:06:16.815 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.815 [2024-04-26 12:02:17.876304] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.815 [2024-04-26 12:02:17.945706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val= 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val= 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val=0x1 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val= 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val= 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val=0 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val= 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val=software 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@22 -- # accel_module=software 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val=32 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 
00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val=32 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val=1 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val=Yes 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val= 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:16.815 12:02:17 -- accel/accel.sh@20 -- # val= 00:06:16.815 12:02:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # IFS=: 00:06:16.815 12:02:17 -- accel/accel.sh@19 -- # read -r var val 00:06:18.200 12:02:19 -- accel/accel.sh@20 -- # val= 00:06:18.200 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.200 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.200 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.200 12:02:19 -- accel/accel.sh@20 -- # val= 00:06:18.200 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.200 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.200 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.200 12:02:19 -- accel/accel.sh@20 -- # val= 00:06:18.200 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.200 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.200 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.200 12:02:19 -- accel/accel.sh@20 -- # val= 00:06:18.200 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.200 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.200 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.200 12:02:19 -- accel/accel.sh@20 -- # val= 00:06:18.200 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.200 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.200 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.200 12:02:19 -- accel/accel.sh@20 -- # val= 00:06:18.200 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.200 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.200 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.200 12:02:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.200 12:02:19 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:18.200 12:02:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.200 00:06:18.200 real 0m1.292s 00:06:18.200 user 0m1.206s 00:06:18.200 sys 0m0.099s 00:06:18.200 12:02:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:18.200 12:02:19 -- common/autotest_common.sh@10 -- # set +x 00:06:18.200 ************************************ 00:06:18.200 END TEST accel_copy_crc32c 00:06:18.200 ************************************ 00:06:18.200 12:02:19 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:18.201 
12:02:19 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:18.201 12:02:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.201 12:02:19 -- common/autotest_common.sh@10 -- # set +x 00:06:18.201 ************************************ 00:06:18.201 START TEST accel_copy_crc32c_C2 00:06:18.201 ************************************ 00:06:18.201 12:02:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:18.201 12:02:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.201 12:02:19 -- accel/accel.sh@17 -- # local accel_module 00:06:18.201 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.201 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.201 12:02:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:18.201 12:02:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:18.201 12:02:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.201 12:02:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.201 12:02:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.201 12:02:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.201 12:02:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.201 12:02:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.201 12:02:19 -- accel/accel.sh@40 -- # local IFS=, 00:06:18.201 12:02:19 -- accel/accel.sh@41 -- # jq -r . 00:06:18.201 [2024-04-26 12:02:19.284658] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:18.201 [2024-04-26 12:02:19.284750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3212355 ] 00:06:18.201 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.201 [2024-04-26 12:02:19.348455] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.201 [2024-04-26 12:02:19.416585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val= 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val= 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val=0x1 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val= 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val= 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 
12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val=0 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val= 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val=software 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@22 -- # accel_module=software 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val=32 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val=32 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val=1 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val=Yes 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val= 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:18.461 12:02:19 -- accel/accel.sh@20 -- # val= 00:06:18.461 12:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # IFS=: 00:06:18.461 12:02:19 -- accel/accel.sh@19 -- # read -r var val 00:06:19.401 12:02:20 -- accel/accel.sh@20 -- # val= 00:06:19.401 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.401 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.401 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.401 12:02:20 -- accel/accel.sh@20 -- # val= 00:06:19.401 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.401 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.401 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.401 12:02:20 -- accel/accel.sh@20 -- # val= 00:06:19.401 12:02:20 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:19.401 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.401 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.401 12:02:20 -- accel/accel.sh@20 -- # val= 00:06:19.401 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.401 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.401 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.401 12:02:20 -- accel/accel.sh@20 -- # val= 00:06:19.401 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.401 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.401 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.401 12:02:20 -- accel/accel.sh@20 -- # val= 00:06:19.401 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.401 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.401 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.401 12:02:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.401 12:02:20 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:19.401 12:02:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.401 00:06:19.401 real 0m1.291s 00:06:19.401 user 0m1.204s 00:06:19.401 sys 0m0.098s 00:06:19.401 12:02:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.401 12:02:20 -- common/autotest_common.sh@10 -- # set +x 00:06:19.401 ************************************ 00:06:19.401 END TEST accel_copy_crc32c_C2 00:06:19.401 ************************************ 00:06:19.401 12:02:20 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:19.401 12:02:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:19.401 12:02:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.401 12:02:20 -- common/autotest_common.sh@10 -- # set +x 00:06:19.662 ************************************ 00:06:19.662 START TEST accel_dualcast 00:06:19.662 ************************************ 00:06:19.662 12:02:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:06:19.662 12:02:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.662 12:02:20 -- accel/accel.sh@17 -- # local accel_module 00:06:19.662 12:02:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.662 12:02:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.662 12:02:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.662 12:02:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.662 12:02:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.662 12:02:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.662 12:02:20 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.662 12:02:20 -- accel/accel.sh@41 -- # jq -r . 00:06:19.662 [2024-04-26 12:02:20.713986] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:19.662 [2024-04-26 12:02:20.714021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3212708 ] 00:06:19.662 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.662 [2024-04-26 12:02:20.765491] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.662 [2024-04-26 12:02:20.828224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val= 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val= 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val=0x1 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val= 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val= 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val=dualcast 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val= 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val=software 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@22 -- # accel_module=software 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val=32 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val=32 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val=1 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 
-- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val=Yes 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val= 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:19.662 12:02:20 -- accel/accel.sh@20 -- # val= 00:06:19.662 12:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # IFS=: 00:06:19.662 12:02:20 -- accel/accel.sh@19 -- # read -r var val 00:06:21.045 12:02:21 -- accel/accel.sh@20 -- # val= 00:06:21.045 12:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.045 12:02:21 -- accel/accel.sh@19 -- # IFS=: 00:06:21.045 12:02:21 -- accel/accel.sh@19 -- # read -r var val 00:06:21.045 12:02:21 -- accel/accel.sh@20 -- # val= 00:06:21.045 12:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.045 12:02:21 -- accel/accel.sh@19 -- # IFS=: 00:06:21.045 12:02:21 -- accel/accel.sh@19 -- # read -r var val 00:06:21.045 12:02:21 -- accel/accel.sh@20 -- # val= 00:06:21.045 12:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.045 12:02:21 -- accel/accel.sh@19 -- # IFS=: 00:06:21.045 12:02:21 -- accel/accel.sh@19 -- # read -r var val 00:06:21.045 12:02:21 -- accel/accel.sh@20 -- # val= 00:06:21.045 12:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.046 12:02:21 -- accel/accel.sh@19 -- # IFS=: 00:06:21.046 12:02:21 -- accel/accel.sh@19 -- # read -r var val 00:06:21.046 12:02:21 -- accel/accel.sh@20 -- # val= 00:06:21.046 12:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.046 12:02:21 -- accel/accel.sh@19 -- # IFS=: 00:06:21.046 12:02:21 -- accel/accel.sh@19 -- # read -r var val 00:06:21.046 12:02:21 -- accel/accel.sh@20 -- # val= 00:06:21.046 12:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.046 12:02:21 -- accel/accel.sh@19 -- # IFS=: 00:06:21.046 12:02:21 -- accel/accel.sh@19 -- # read -r var val 00:06:21.046 12:02:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.046 12:02:21 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:21.046 12:02:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.046 00:06:21.046 real 0m1.254s 00:06:21.046 user 0m1.183s 00:06:21.046 sys 0m0.081s 00:06:21.046 12:02:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.046 12:02:21 -- common/autotest_common.sh@10 -- # set +x 00:06:21.046 ************************************ 00:06:21.046 END TEST accel_dualcast 00:06:21.046 ************************************ 00:06:21.046 12:02:21 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:21.046 12:02:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:21.046 12:02:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.046 12:02:21 -- common/autotest_common.sh@10 -- # set +x 00:06:21.046 ************************************ 00:06:21.046 START TEST accel_compare 00:06:21.046 ************************************ 00:06:21.046 12:02:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:06:21.046 12:02:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.046 12:02:22 
-- accel/accel.sh@17 -- # local accel_module 00:06:21.046 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.046 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.046 12:02:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:21.046 12:02:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:21.046 12:02:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.046 12:02:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.046 12:02:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.046 12:02:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.046 12:02:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.046 12:02:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.046 12:02:22 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.046 12:02:22 -- accel/accel.sh@41 -- # jq -r . 00:06:21.046 [2024-04-26 12:02:22.154617] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:21.046 [2024-04-26 12:02:22.154712] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3212984 ] 00:06:21.046 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.046 [2024-04-26 12:02:22.220646] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.306 [2024-04-26 12:02:22.292674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.306 12:02:22 -- accel/accel.sh@20 -- # val= 00:06:21.306 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.306 12:02:22 -- accel/accel.sh@20 -- # val= 00:06:21.306 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.306 12:02:22 -- accel/accel.sh@20 -- # val=0x1 00:06:21.306 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.306 12:02:22 -- accel/accel.sh@20 -- # val= 00:06:21.306 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.306 12:02:22 -- accel/accel.sh@20 -- # val= 00:06:21.306 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.306 12:02:22 -- accel/accel.sh@20 -- # val=compare 00:06:21.306 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.306 12:02:22 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.306 12:02:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.306 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.306 12:02:22 -- accel/accel.sh@20 -- # val= 00:06:21.306 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.306 12:02:22 -- 
accel/accel.sh@20 -- # val=software 00:06:21.306 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.306 12:02:22 -- accel/accel.sh@22 -- # accel_module=software 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.306 12:02:22 -- accel/accel.sh@20 -- # val=32 00:06:21.306 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.306 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.306 12:02:22 -- accel/accel.sh@20 -- # val=32 00:06:21.307 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.307 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.307 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.307 12:02:22 -- accel/accel.sh@20 -- # val=1 00:06:21.307 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.307 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.307 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.307 12:02:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.307 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.307 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.307 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.307 12:02:22 -- accel/accel.sh@20 -- # val=Yes 00:06:21.307 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.307 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.307 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.307 12:02:22 -- accel/accel.sh@20 -- # val= 00:06:21.307 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.307 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.307 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:21.307 12:02:22 -- accel/accel.sh@20 -- # val= 00:06:21.307 12:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.307 12:02:22 -- accel/accel.sh@19 -- # IFS=: 00:06:21.307 12:02:22 -- accel/accel.sh@19 -- # read -r var val 00:06:22.248 12:02:23 -- accel/accel.sh@20 -- # val= 00:06:22.248 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.248 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.248 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.248 12:02:23 -- accel/accel.sh@20 -- # val= 00:06:22.248 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.248 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.248 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.248 12:02:23 -- accel/accel.sh@20 -- # val= 00:06:22.248 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.248 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.248 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.248 12:02:23 -- accel/accel.sh@20 -- # val= 00:06:22.248 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.248 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.248 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.248 12:02:23 -- accel/accel.sh@20 -- # val= 00:06:22.248 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.248 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.248 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.248 12:02:23 -- accel/accel.sh@20 -- # val= 00:06:22.248 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.248 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.248 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.248 12:02:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.248 12:02:23 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:22.248 12:02:23 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:22.248 00:06:22.248 real 0m1.298s 00:06:22.248 user 0m1.194s 00:06:22.248 sys 0m0.113s 00:06:22.248 12:02:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:22.248 12:02:23 -- common/autotest_common.sh@10 -- # set +x 00:06:22.248 ************************************ 00:06:22.248 END TEST accel_compare 00:06:22.248 ************************************ 00:06:22.248 12:02:23 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:22.248 12:02:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:22.248 12:02:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.248 12:02:23 -- common/autotest_common.sh@10 -- # set +x 00:06:22.508 ************************************ 00:06:22.508 START TEST accel_xor 00:06:22.508 ************************************ 00:06:22.508 12:02:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:06:22.508 12:02:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.508 12:02:23 -- accel/accel.sh@17 -- # local accel_module 00:06:22.508 12:02:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:22.508 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.508 12:02:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:22.508 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.508 12:02:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.508 12:02:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.508 12:02:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.508 12:02:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.508 12:02:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.508 12:02:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.508 12:02:23 -- accel/accel.sh@40 -- # local IFS=, 00:06:22.508 12:02:23 -- accel/accel.sh@41 -- # jq -r . 00:06:22.508 [2024-04-26 12:02:23.595492] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
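The xor case that begins here is run twice by the harness: first with accel_perf's default of two source buffers, then, a little further down, with -x 3 so that three sources are XORed together (the val=2 and val=3 reads in the traces that follow). As a hedged sketch, reusing SPDK_DIR and /tmp/accel.json from the dualcast example above:

# two source buffers (accel_perf default), as in the first xor run
"$SPDK_DIR/build/examples/accel_perf" -c /tmp/accel.json -t 1 -w xor -y

# three source buffers, matching the -x 3 run the harness performs next
"$SPDK_DIR/build/examples/accel_perf" -c /tmp/accel.json -t 1 -w xor -y -x 3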
00:06:22.508 [2024-04-26 12:02:23.595526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3213226 ] 00:06:22.508 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.508 [2024-04-26 12:02:23.647889] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.508 [2024-04-26 12:02:23.711600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val= 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val= 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val=0x1 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val= 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val= 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val=xor 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val=2 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val= 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val=software 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@22 -- # accel_module=software 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val=32 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val=32 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- 
accel/accel.sh@20 -- # val=1 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val=Yes 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.768 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.768 12:02:23 -- accel/accel.sh@20 -- # val= 00:06:22.768 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.769 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.769 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:22.769 12:02:23 -- accel/accel.sh@20 -- # val= 00:06:22.769 12:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.769 12:02:23 -- accel/accel.sh@19 -- # IFS=: 00:06:22.769 12:02:23 -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:02:24 -- accel/accel.sh@20 -- # val= 00:06:23.708 12:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:02:24 -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:02:24 -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:02:24 -- accel/accel.sh@20 -- # val= 00:06:23.708 12:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:02:24 -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:02:24 -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:02:24 -- accel/accel.sh@20 -- # val= 00:06:23.708 12:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:02:24 -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:02:24 -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:02:24 -- accel/accel.sh@20 -- # val= 00:06:23.708 12:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:02:24 -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:02:24 -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:02:24 -- accel/accel.sh@20 -- # val= 00:06:23.708 12:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:02:24 -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:02:24 -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:02:24 -- accel/accel.sh@20 -- # val= 00:06:23.708 12:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.708 12:02:24 -- accel/accel.sh@19 -- # IFS=: 00:06:23.708 12:02:24 -- accel/accel.sh@19 -- # read -r var val 00:06:23.708 12:02:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.708 12:02:24 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:23.708 12:02:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.708 00:06:23.708 real 0m1.256s 00:06:23.708 user 0m1.178s 00:06:23.708 sys 0m0.089s 00:06:23.708 12:02:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:23.708 12:02:24 -- common/autotest_common.sh@10 -- # set +x 00:06:23.708 ************************************ 00:06:23.708 END TEST accel_xor 00:06:23.708 ************************************ 00:06:23.708 12:02:24 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:23.708 12:02:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:23.708 12:02:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.708 12:02:24 -- common/autotest_common.sh@10 -- # set +x 00:06:23.968 ************************************ 00:06:23.968 START TEST accel_xor 
00:06:23.968 ************************************ 00:06:23.968 12:02:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:23.968 12:02:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.968 12:02:25 -- accel/accel.sh@17 -- # local accel_module 00:06:23.968 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:23.968 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:23.968 12:02:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:23.968 12:02:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:23.968 12:02:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.968 12:02:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.968 12:02:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.968 12:02:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.968 12:02:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.968 12:02:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.968 12:02:25 -- accel/accel.sh@40 -- # local IFS=, 00:06:23.968 12:02:25 -- accel/accel.sh@41 -- # jq -r . 00:06:23.968 [2024-04-26 12:02:25.057922] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:23.968 [2024-04-26 12:02:25.058014] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3213482 ] 00:06:23.968 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.968 [2024-04-26 12:02:25.122693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.968 [2024-04-26 12:02:25.186395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.228 12:02:25 -- accel/accel.sh@20 -- # val= 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val= 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val=0x1 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val= 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val= 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val=xor 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val=3 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val= 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val=software 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@22 -- # accel_module=software 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val=32 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val=32 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val=1 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val=Yes 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val= 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:24.229 12:02:25 -- accel/accel.sh@20 -- # val= 00:06:24.229 12:02:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # IFS=: 00:06:24.229 12:02:25 -- accel/accel.sh@19 -- # read -r var val 00:06:25.194 12:02:26 -- accel/accel.sh@20 -- # val= 00:06:25.194 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.194 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.194 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.194 12:02:26 -- accel/accel.sh@20 -- # val= 00:06:25.194 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.194 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.194 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.194 12:02:26 -- accel/accel.sh@20 -- # val= 00:06:25.194 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.194 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.194 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.194 12:02:26 -- accel/accel.sh@20 -- # val= 00:06:25.194 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.194 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.194 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.194 12:02:26 -- accel/accel.sh@20 -- # val= 00:06:25.194 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.194 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.194 12:02:26 -- accel/accel.sh@19 -- # 
read -r var val 00:06:25.194 12:02:26 -- accel/accel.sh@20 -- # val= 00:06:25.194 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.194 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.194 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.194 12:02:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.194 12:02:26 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:25.194 12:02:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.194 00:06:25.194 real 0m1.287s 00:06:25.194 user 0m1.190s 00:06:25.194 sys 0m0.108s 00:06:25.194 12:02:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.194 12:02:26 -- common/autotest_common.sh@10 -- # set +x 00:06:25.194 ************************************ 00:06:25.194 END TEST accel_xor 00:06:25.194 ************************************ 00:06:25.194 12:02:26 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:25.194 12:02:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:25.194 12:02:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.194 12:02:26 -- common/autotest_common.sh@10 -- # set +x 00:06:25.454 ************************************ 00:06:25.454 START TEST accel_dif_verify 00:06:25.454 ************************************ 00:06:25.454 12:02:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:25.454 12:02:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.454 12:02:26 -- accel/accel.sh@17 -- # local accel_module 00:06:25.454 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.454 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.454 12:02:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:25.454 12:02:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:25.454 12:02:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.454 12:02:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.454 12:02:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.454 12:02:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.454 12:02:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.454 12:02:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.454 12:02:26 -- accel/accel.sh@40 -- # local IFS=, 00:06:25.454 12:02:26 -- accel/accel.sh@41 -- # jq -r . 00:06:25.454 [2024-04-26 12:02:26.510004] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:25.454 [2024-04-26 12:02:26.510115] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3213834 ] 00:06:25.454 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.454 [2024-04-26 12:02:26.581415] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.454 [2024-04-26 12:02:26.652857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val= 00:06:25.714 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val= 00:06:25.714 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val=0x1 00:06:25.714 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val= 00:06:25.714 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val= 00:06:25.714 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val=dif_verify 00:06:25.714 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.714 12:02:26 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.714 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.714 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:25.714 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:25.714 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val= 00:06:25.714 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val=software 00:06:25.714 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.714 12:02:26 -- accel/accel.sh@22 -- # accel_module=software 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # read -r 
var val 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val=32 00:06:25.714 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val=32 00:06:25.714 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.714 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.714 12:02:26 -- accel/accel.sh@20 -- # val=1 00:06:25.715 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.715 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.715 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.715 12:02:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.715 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.715 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.715 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.715 12:02:26 -- accel/accel.sh@20 -- # val=No 00:06:25.715 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.715 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.715 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.715 12:02:26 -- accel/accel.sh@20 -- # val= 00:06:25.715 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.715 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.715 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:25.715 12:02:26 -- accel/accel.sh@20 -- # val= 00:06:25.715 12:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.715 12:02:26 -- accel/accel.sh@19 -- # IFS=: 00:06:25.715 12:02:26 -- accel/accel.sh@19 -- # read -r var val 00:06:26.656 12:02:27 -- accel/accel.sh@20 -- # val= 00:06:26.656 12:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.656 12:02:27 -- accel/accel.sh@19 -- # IFS=: 00:06:26.656 12:02:27 -- accel/accel.sh@19 -- # read -r var val 00:06:26.656 12:02:27 -- accel/accel.sh@20 -- # val= 00:06:26.656 12:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.656 12:02:27 -- accel/accel.sh@19 -- # IFS=: 00:06:26.656 12:02:27 -- accel/accel.sh@19 -- # read -r var val 00:06:26.656 12:02:27 -- accel/accel.sh@20 -- # val= 00:06:26.656 12:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.656 12:02:27 -- accel/accel.sh@19 -- # IFS=: 00:06:26.656 12:02:27 -- accel/accel.sh@19 -- # read -r var val 00:06:26.656 12:02:27 -- accel/accel.sh@20 -- # val= 00:06:26.656 12:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.656 12:02:27 -- accel/accel.sh@19 -- # IFS=: 00:06:26.656 12:02:27 -- accel/accel.sh@19 -- # read -r var val 00:06:26.656 12:02:27 -- accel/accel.sh@20 -- # val= 00:06:26.656 12:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.656 12:02:27 -- accel/accel.sh@19 -- # IFS=: 00:06:26.656 12:02:27 -- accel/accel.sh@19 -- # read -r var val 00:06:26.656 12:02:27 -- accel/accel.sh@20 -- # val= 00:06:26.656 12:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.656 12:02:27 -- accel/accel.sh@19 -- # IFS=: 00:06:26.656 12:02:27 -- accel/accel.sh@19 -- # read -r var val 00:06:26.656 12:02:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.656 12:02:27 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:26.656 12:02:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.656 00:06:26.656 real 0m1.303s 00:06:26.656 user 0m1.197s 00:06:26.656 sys 0m0.119s 00:06:26.656 12:02:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.656 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:26.656 
************************************ 00:06:26.656 END TEST accel_dif_verify 00:06:26.656 ************************************ 00:06:26.656 12:02:27 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:26.656 12:02:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:26.656 12:02:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.656 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:26.916 ************************************ 00:06:26.916 START TEST accel_dif_generate 00:06:26.916 ************************************ 00:06:26.916 12:02:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:26.916 12:02:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.916 12:02:27 -- accel/accel.sh@17 -- # local accel_module 00:06:26.916 12:02:27 -- accel/accel.sh@19 -- # IFS=: 00:06:26.916 12:02:27 -- accel/accel.sh@19 -- # read -r var val 00:06:26.916 12:02:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:26.916 12:02:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:26.916 12:02:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.916 12:02:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.916 12:02:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.916 12:02:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.916 12:02:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.916 12:02:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.916 12:02:27 -- accel/accel.sh@40 -- # local IFS=, 00:06:26.916 12:02:27 -- accel/accel.sh@41 -- # jq -r . 00:06:26.916 [2024-04-26 12:02:27.982789] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
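By this point the same framing repeats for every case: a START banner, the xtraced test body, bash time output (the real/user/sys triplet), and an END banner, all produced by the run_test helper in test/common/autotest_common.sh that the accel.sh lines above call through. The function below is only a simplified sketch of that pattern to make the log easier to follow; the real helper also handles xtrace toggling and exit-status bookkeeping.

run_test() {
        # simplified sketch; the actual helper in autotest_common.sh does more
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # emits the real/user/sys lines interleaved above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
}

# invoked as in the log, e.g.:
# run_test accel_dif_generate accel_test -t 1 -w dif_generate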
00:06:26.916 [2024-04-26 12:02:27.982857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3214196 ] 00:06:26.916 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.916 [2024-04-26 12:02:28.045733] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.916 [2024-04-26 12:02:28.112098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.176 12:02:28 -- accel/accel.sh@20 -- # val= 00:06:27.176 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.176 12:02:28 -- accel/accel.sh@20 -- # val= 00:06:27.176 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.176 12:02:28 -- accel/accel.sh@20 -- # val=0x1 00:06:27.176 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.176 12:02:28 -- accel/accel.sh@20 -- # val= 00:06:27.176 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.176 12:02:28 -- accel/accel.sh@20 -- # val= 00:06:27.176 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.176 12:02:28 -- accel/accel.sh@20 -- # val=dif_generate 00:06:27.176 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.176 12:02:28 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.176 12:02:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.176 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.176 12:02:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.176 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.176 12:02:28 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:27.176 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.176 12:02:28 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:27.176 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.176 12:02:28 -- accel/accel.sh@20 -- # val= 00:06:27.176 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.176 12:02:28 -- accel/accel.sh@20 -- # val=software 00:06:27.176 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.176 12:02:28 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # read 
-r var val 00:06:27.176 12:02:28 -- accel/accel.sh@20 -- # val=32 00:06:27.176 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.176 12:02:28 -- accel/accel.sh@20 -- # val=32 00:06:27.176 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.176 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.177 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.177 12:02:28 -- accel/accel.sh@20 -- # val=1 00:06:27.177 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.177 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.177 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.177 12:02:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.177 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.177 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.177 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.177 12:02:28 -- accel/accel.sh@20 -- # val=No 00:06:27.177 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.177 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.177 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.177 12:02:28 -- accel/accel.sh@20 -- # val= 00:06:27.177 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.177 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.177 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:27.177 12:02:28 -- accel/accel.sh@20 -- # val= 00:06:27.177 12:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.177 12:02:28 -- accel/accel.sh@19 -- # IFS=: 00:06:27.177 12:02:28 -- accel/accel.sh@19 -- # read -r var val 00:06:28.118 12:02:29 -- accel/accel.sh@20 -- # val= 00:06:28.118 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.118 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.118 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.118 12:02:29 -- accel/accel.sh@20 -- # val= 00:06:28.118 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.118 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.118 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.118 12:02:29 -- accel/accel.sh@20 -- # val= 00:06:28.118 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.118 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.118 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.118 12:02:29 -- accel/accel.sh@20 -- # val= 00:06:28.118 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.118 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.118 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.118 12:02:29 -- accel/accel.sh@20 -- # val= 00:06:28.118 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.118 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.118 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.118 12:02:29 -- accel/accel.sh@20 -- # val= 00:06:28.118 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.118 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.118 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.118 12:02:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.118 12:02:29 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:28.118 12:02:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.118 00:06:28.118 real 0m1.287s 00:06:28.118 user 0m1.199s 00:06:28.118 sys 0m0.100s 00:06:28.118 12:02:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.118 12:02:29 -- common/autotest_common.sh@10 -- # set +x 00:06:28.118 
************************************ 00:06:28.118 END TEST accel_dif_generate 00:06:28.118 ************************************ 00:06:28.118 12:02:29 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:28.118 12:02:29 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:28.118 12:02:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.118 12:02:29 -- common/autotest_common.sh@10 -- # set +x 00:06:28.378 ************************************ 00:06:28.378 START TEST accel_dif_generate_copy 00:06:28.378 ************************************ 00:06:28.378 12:02:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:28.378 12:02:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.378 12:02:29 -- accel/accel.sh@17 -- # local accel_module 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 12:02:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:28.378 12:02:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:28.378 12:02:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.378 12:02:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.378 12:02:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.378 12:02:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.378 12:02:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.378 12:02:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.378 12:02:29 -- accel/accel.sh@40 -- # local IFS=, 00:06:28.378 12:02:29 -- accel/accel.sh@41 -- # jq -r . 00:06:28.378 [2024-04-26 12:02:29.430335] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:28.378 [2024-04-26 12:02:29.430424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3214549 ] 00:06:28.378 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.378 [2024-04-26 12:02:29.492256] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.378 [2024-04-26 12:02:29.554991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.378 12:02:29 -- accel/accel.sh@20 -- # val= 00:06:28.378 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 12:02:29 -- accel/accel.sh@20 -- # val= 00:06:28.378 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 12:02:29 -- accel/accel.sh@20 -- # val=0x1 00:06:28.378 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 12:02:29 -- accel/accel.sh@20 -- # val= 00:06:28.378 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 12:02:29 -- accel/accel.sh@20 -- # val= 00:06:28.378 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 12:02:29 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:28.378 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 12:02:29 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 12:02:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.378 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 12:02:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.378 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 12:02:29 -- accel/accel.sh@20 -- # val= 00:06:28.378 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 12:02:29 -- accel/accel.sh@20 -- # val=software 00:06:28.378 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.379 12:02:29 -- accel/accel.sh@22 -- # accel_module=software 00:06:28.379 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.379 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.379 12:02:29 -- accel/accel.sh@20 -- # val=32 00:06:28.379 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.379 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.379 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.379 12:02:29 -- accel/accel.sh@20 -- # val=32 00:06:28.379 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.379 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.379 12:02:29 -- accel/accel.sh@19 -- # read -r 
var val 00:06:28.379 12:02:29 -- accel/accel.sh@20 -- # val=1 00:06:28.379 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.379 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.379 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.379 12:02:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.639 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.639 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.639 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.639 12:02:29 -- accel/accel.sh@20 -- # val=No 00:06:28.639 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.639 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.639 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.639 12:02:29 -- accel/accel.sh@20 -- # val= 00:06:28.639 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.639 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.639 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:28.639 12:02:29 -- accel/accel.sh@20 -- # val= 00:06:28.639 12:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.639 12:02:29 -- accel/accel.sh@19 -- # IFS=: 00:06:28.639 12:02:29 -- accel/accel.sh@19 -- # read -r var val 00:06:29.579 12:02:30 -- accel/accel.sh@20 -- # val= 00:06:29.579 12:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.579 12:02:30 -- accel/accel.sh@19 -- # IFS=: 00:06:29.579 12:02:30 -- accel/accel.sh@19 -- # read -r var val 00:06:29.579 12:02:30 -- accel/accel.sh@20 -- # val= 00:06:29.579 12:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.579 12:02:30 -- accel/accel.sh@19 -- # IFS=: 00:06:29.579 12:02:30 -- accel/accel.sh@19 -- # read -r var val 00:06:29.579 12:02:30 -- accel/accel.sh@20 -- # val= 00:06:29.579 12:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.579 12:02:30 -- accel/accel.sh@19 -- # IFS=: 00:06:29.579 12:02:30 -- accel/accel.sh@19 -- # read -r var val 00:06:29.579 12:02:30 -- accel/accel.sh@20 -- # val= 00:06:29.579 12:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.579 12:02:30 -- accel/accel.sh@19 -- # IFS=: 00:06:29.579 12:02:30 -- accel/accel.sh@19 -- # read -r var val 00:06:29.579 12:02:30 -- accel/accel.sh@20 -- # val= 00:06:29.579 12:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.579 12:02:30 -- accel/accel.sh@19 -- # IFS=: 00:06:29.579 12:02:30 -- accel/accel.sh@19 -- # read -r var val 00:06:29.579 12:02:30 -- accel/accel.sh@20 -- # val= 00:06:29.579 12:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.579 12:02:30 -- accel/accel.sh@19 -- # IFS=: 00:06:29.579 12:02:30 -- accel/accel.sh@19 -- # read -r var val 00:06:29.579 12:02:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.579 12:02:30 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:29.579 12:02:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.579 00:06:29.579 real 0m1.285s 00:06:29.579 user 0m1.200s 00:06:29.579 sys 0m0.095s 00:06:29.579 12:02:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.579 12:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:29.579 ************************************ 00:06:29.579 END TEST accel_dif_generate_copy 00:06:29.579 ************************************ 00:06:29.579 12:02:30 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:29.579 12:02:30 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.579 12:02:30 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:29.579 12:02:30 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.579 12:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:29.840 ************************************ 00:06:29.840 START TEST accel_comp 00:06:29.840 ************************************ 00:06:29.840 12:02:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.840 12:02:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.840 12:02:30 -- accel/accel.sh@17 -- # local accel_module 00:06:29.840 12:02:30 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.840 12:02:30 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.840 12:02:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.840 12:02:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.840 12:02:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.840 12:02:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.840 12:02:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.840 12:02:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.840 12:02:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:29.840 12:02:30 -- accel/accel.sh@41 -- # jq -r . 00:06:29.840 [2024-04-26 12:02:30.876102] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:29.840 [2024-04-26 12:02:30.876149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3214907 ] 00:06:29.840 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.840 [2024-04-26 12:02:30.934294] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.840 [2024-04-26 12:02:30.996706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val= 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val= 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val= 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val=0x1 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val= 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val= 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 
-- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val=compress 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val= 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val=software 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@22 -- # accel_module=software 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val=32 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val=32 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val=1 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val=No 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val= 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:29.840 12:02:31 -- accel/accel.sh@20 -- # val= 00:06:29.840 12:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # IFS=: 00:06:29.840 12:02:31 -- accel/accel.sh@19 -- # read -r var val 00:06:31.223 12:02:32 -- accel/accel.sh@20 -- # val= 00:06:31.223 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.223 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.223 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.223 12:02:32 -- accel/accel.sh@20 -- # val= 00:06:31.223 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.223 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.223 12:02:32 -- accel/accel.sh@19 -- # read 
-r var val 00:06:31.223 12:02:32 -- accel/accel.sh@20 -- # val= 00:06:31.223 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.223 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.223 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.223 12:02:32 -- accel/accel.sh@20 -- # val= 00:06:31.223 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.223 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.223 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.223 12:02:32 -- accel/accel.sh@20 -- # val= 00:06:31.223 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.223 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.223 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.223 12:02:32 -- accel/accel.sh@20 -- # val= 00:06:31.223 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.223 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.223 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.223 12:02:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.223 12:02:32 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:31.223 12:02:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.223 00:06:31.223 real 0m1.267s 00:06:31.223 user 0m1.181s 00:06:31.223 sys 0m0.098s 00:06:31.223 12:02:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.223 12:02:32 -- common/autotest_common.sh@10 -- # set +x 00:06:31.223 ************************************ 00:06:31.223 END TEST accel_comp 00:06:31.223 ************************************ 00:06:31.223 12:02:32 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.223 12:02:32 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:31.223 12:02:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.223 12:02:32 -- common/autotest_common.sh@10 -- # set +x 00:06:31.223 ************************************ 00:06:31.223 START TEST accel_decomp 00:06:31.223 ************************************ 00:06:31.224 12:02:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.224 12:02:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.224 12:02:32 -- accel/accel.sh@17 -- # local accel_module 00:06:31.224 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.224 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.224 12:02:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.224 12:02:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.224 12:02:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.224 12:02:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.224 12:02:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.224 12:02:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.224 12:02:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.224 12:02:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.224 12:02:32 -- accel/accel.sh@40 -- # local IFS=, 00:06:31.224 12:02:32 -- accel/accel.sh@41 -- # jq -r . 00:06:31.224 [2024-04-26 12:02:32.300314] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
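The compress and decompress runs differ from the generate tests only in that they read a real payload: -l points accel_perf at the test/accel/bib file in the checkout, and the decompress variants add -y, which (an assumption based on accel_perf's usage text rather than anything shown in this log) enables verification of the output. A hedged sketch of the pair, reusing the paths from this job and again omitting the harness-supplied -c config:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BIB=$SPDK/test/accel/bib
    # compress the bib payload for one second, then decompress it with verification (-y)
    $SPDK/build/examples/accel_perf -t 1 -w compress -l $BIB
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $BIB -y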
00:06:31.224 [2024-04-26 12:02:32.300399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3215252 ] 00:06:31.224 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.224 [2024-04-26 12:02:32.371095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.224 [2024-04-26 12:02:32.438056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.484 12:02:32 -- accel/accel.sh@20 -- # val= 00:06:31.484 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.484 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.484 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.484 12:02:32 -- accel/accel.sh@20 -- # val= 00:06:31.484 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.484 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.484 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.484 12:02:32 -- accel/accel.sh@20 -- # val= 00:06:31.484 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.484 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.484 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.484 12:02:32 -- accel/accel.sh@20 -- # val=0x1 00:06:31.484 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.484 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.484 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.484 12:02:32 -- accel/accel.sh@20 -- # val= 00:06:31.484 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.484 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.484 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.485 12:02:32 -- accel/accel.sh@20 -- # val= 00:06:31.485 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.485 12:02:32 -- accel/accel.sh@20 -- # val=decompress 00:06:31.485 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.485 12:02:32 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.485 12:02:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.485 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.485 12:02:32 -- accel/accel.sh@20 -- # val= 00:06:31.485 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.485 12:02:32 -- accel/accel.sh@20 -- # val=software 00:06:31.485 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.485 12:02:32 -- accel/accel.sh@22 -- # accel_module=software 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.485 12:02:32 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.485 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.485 12:02:32 -- accel/accel.sh@20 -- # val=32 00:06:31.485 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.485 12:02:32 
-- accel/accel.sh@19 -- # read -r var val 00:06:31.485 12:02:32 -- accel/accel.sh@20 -- # val=32 00:06:31.485 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.485 12:02:32 -- accel/accel.sh@20 -- # val=1 00:06:31.485 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.485 12:02:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.485 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.485 12:02:32 -- accel/accel.sh@20 -- # val=Yes 00:06:31.485 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.485 12:02:32 -- accel/accel.sh@20 -- # val= 00:06:31.485 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:31.485 12:02:32 -- accel/accel.sh@20 -- # val= 00:06:31.485 12:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # IFS=: 00:06:31.485 12:02:32 -- accel/accel.sh@19 -- # read -r var val 00:06:32.425 12:02:33 -- accel/accel.sh@20 -- # val= 00:06:32.425 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.425 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.425 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.425 12:02:33 -- accel/accel.sh@20 -- # val= 00:06:32.425 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.425 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.425 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.425 12:02:33 -- accel/accel.sh@20 -- # val= 00:06:32.425 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.425 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.425 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.425 12:02:33 -- accel/accel.sh@20 -- # val= 00:06:32.425 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.425 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.425 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.425 12:02:33 -- accel/accel.sh@20 -- # val= 00:06:32.425 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.425 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.425 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.425 12:02:33 -- accel/accel.sh@20 -- # val= 00:06:32.425 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.425 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.425 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.425 12:02:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.425 12:02:33 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.425 12:02:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.425 00:06:32.425 real 0m1.298s 00:06:32.425 user 0m1.201s 00:06:32.425 sys 0m0.108s 00:06:32.425 12:02:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.425 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:32.425 ************************************ 00:06:32.425 END TEST accel_decomp 00:06:32.425 ************************************ 00:06:32.425 12:02:33 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.425 12:02:33 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:32.425 12:02:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.425 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:32.685 ************************************ 00:06:32.685 START TEST accel_decmop_full 00:06:32.685 ************************************ 00:06:32.685 12:02:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.685 12:02:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.685 12:02:33 -- accel/accel.sh@17 -- # local accel_module 00:06:32.685 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.685 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.685 12:02:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.685 12:02:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.685 12:02:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.685 12:02:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.685 12:02:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.685 12:02:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.685 12:02:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.685 12:02:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.685 12:02:33 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.685 12:02:33 -- accel/accel.sh@41 -- # jq -r . 00:06:32.685 [2024-04-26 12:02:33.787334] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
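The _full variants add -o 0 to the same decompress invocation; the visible effect in the trace is that the per-operation data size becomes the whole 111250-byte bib payload instead of the default 4096-byte chunks. Reading -o as the per-operation block size, with 0 meaning "use the full input", is an inference from those trace values, not something the log states. Sketch (paths as used by this job, -c config again omitted):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # full-buffer decompress: the trace records '111250 bytes' instead of '4096 bytes'
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0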
00:06:32.685 [2024-04-26 12:02:33.787432] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3215503 ] 00:06:32.685 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.685 [2024-04-26 12:02:33.853234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.945 [2024-04-26 12:02:33.926166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val= 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val= 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val= 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val=0x1 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val= 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val= 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val=decompress 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val= 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val=software 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@22 -- # accel_module=software 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val=32 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 
12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val=32 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val=1 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val=Yes 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val= 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:32.945 12:02:33 -- accel/accel.sh@20 -- # val= 00:06:32.945 12:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # IFS=: 00:06:32.945 12:02:33 -- accel/accel.sh@19 -- # read -r var val 00:06:33.885 12:02:35 -- accel/accel.sh@20 -- # val= 00:06:33.885 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.885 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.885 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.885 12:02:35 -- accel/accel.sh@20 -- # val= 00:06:33.885 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.885 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.885 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.885 12:02:35 -- accel/accel.sh@20 -- # val= 00:06:33.885 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.885 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.885 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.885 12:02:35 -- accel/accel.sh@20 -- # val= 00:06:33.885 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.885 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.885 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.885 12:02:35 -- accel/accel.sh@20 -- # val= 00:06:33.885 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.885 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.885 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.885 12:02:35 -- accel/accel.sh@20 -- # val= 00:06:33.885 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.885 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:33.885 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:33.885 12:02:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.885 12:02:35 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.885 12:02:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.885 00:06:33.885 real 0m1.315s 00:06:33.885 user 0m1.217s 00:06:33.885 sys 0m0.109s 00:06:33.885 12:02:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.885 12:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:33.885 ************************************ 00:06:33.885 END TEST accel_decmop_full 00:06:33.885 ************************************ 00:06:34.147 12:02:35 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.147 12:02:35 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:34.147 12:02:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.147 12:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.147 ************************************ 00:06:34.147 START TEST accel_decomp_mcore 00:06:34.147 ************************************ 00:06:34.147 12:02:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.147 12:02:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.147 12:02:35 -- accel/accel.sh@17 -- # local accel_module 00:06:34.147 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.147 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.147 12:02:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.147 12:02:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.147 12:02:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.147 12:02:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.147 12:02:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.147 12:02:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.147 12:02:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.147 12:02:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.147 12:02:35 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.147 12:02:35 -- accel/accel.sh@41 -- # jq -r . 00:06:34.147 [2024-04-26 12:02:35.285295] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
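The mcore variant passes -m 0xf, which reaches DPDK as the EAL coremask -c 0xf: the log below reports four cores available, four reactors start, and the test summary accordingly shows roughly four seconds of user time for a one-second wall-clock run. Sketch of the same invocation (paths from this job, -c config omitted as before):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # decompress workload on cores 0-3 (core mask 0xf -> 4 reactors in the log)
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf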
00:06:34.147 [2024-04-26 12:02:35.285363] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3215777 ] 00:06:34.147 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.147 [2024-04-26 12:02:35.351099] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.407 [2024-04-26 12:02:35.427186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.407 [2024-04-26 12:02:35.427306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.407 [2024-04-26 12:02:35.427465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.407 [2024-04-26 12:02:35.427466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.407 12:02:35 -- accel/accel.sh@20 -- # val= 00:06:34.407 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.407 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.407 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.407 12:02:35 -- accel/accel.sh@20 -- # val= 00:06:34.407 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.407 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.407 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.407 12:02:35 -- accel/accel.sh@20 -- # val= 00:06:34.407 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val=0xf 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val= 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val= 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val=decompress 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val= 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val=software 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@22 -- # accel_module=software 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val=32 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val=32 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val=1 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val=Yes 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val= 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:34.408 12:02:35 -- accel/accel.sh@20 -- # val= 00:06:34.408 12:02:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # IFS=: 00:06:34.408 12:02:35 -- accel/accel.sh@19 -- # read -r var val 00:06:35.349 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.349 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.349 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.349 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.349 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.349 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.349 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.349 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.349 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.349 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.349 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.349 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.349 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.349 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.349 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.349 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.349 
12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.349 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.349 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.349 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.349 12:02:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.349 12:02:36 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:35.349 12:02:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.349 00:06:35.349 real 0m1.311s 00:06:35.349 user 0m4.449s 00:06:35.349 sys 0m0.113s 00:06:35.349 12:02:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:35.349 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:35.349 ************************************ 00:06:35.349 END TEST accel_decomp_mcore 00:06:35.349 ************************************ 00:06:35.610 12:02:36 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.610 12:02:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:35.610 12:02:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.610 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:35.610 ************************************ 00:06:35.610 START TEST accel_decomp_full_mcore 00:06:35.610 ************************************ 00:06:35.610 12:02:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.610 12:02:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.610 12:02:36 -- accel/accel.sh@17 -- # local accel_module 00:06:35.610 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.610 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.610 12:02:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.610 12:02:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.610 12:02:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.610 12:02:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.610 12:02:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.610 12:02:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.610 12:02:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.610 12:02:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.610 12:02:36 -- accel/accel.sh@40 -- # local IFS=, 00:06:35.610 12:02:36 -- accel/accel.sh@41 -- # jq -r . 00:06:35.610 [2024-04-26 12:02:36.779501] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:35.610 [2024-04-26 12:02:36.779570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3216047 ] 00:06:35.610 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.870 [2024-04-26 12:02:36.847280] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.870 [2024-04-26 12:02:36.922653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.870 [2024-04-26 12:02:36.922784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.870 [2024-04-26 12:02:36.922943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.871 [2024-04-26 12:02:36.922943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val=0xf 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val=decompress 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val=software 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@22 -- # accel_module=software 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val=32 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val=32 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val=1 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val=Yes 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:35.871 12:02:36 -- accel/accel.sh@20 -- # val= 00:06:35.871 12:02:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # IFS=: 00:06:35.871 12:02:36 -- accel/accel.sh@19 -- # read -r var val 00:06:37.261 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.261 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.261 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.261 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.261 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.261 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.261 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.261 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.261 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.261 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.261 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.261 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.261 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.261 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.261 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.261 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.261 
12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.261 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.261 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.261 12:02:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.261 12:02:38 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.261 12:02:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.261 00:06:37.261 real 0m1.326s 00:06:37.261 user 0m4.498s 00:06:37.261 sys 0m0.119s 00:06:37.261 12:02:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.261 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:06:37.261 ************************************ 00:06:37.261 END TEST accel_decomp_full_mcore 00:06:37.261 ************************************ 00:06:37.261 12:02:38 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.261 12:02:38 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:37.261 12:02:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.261 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:06:37.261 ************************************ 00:06:37.261 START TEST accel_decomp_mthread 00:06:37.261 ************************************ 00:06:37.261 12:02:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.261 12:02:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.261 12:02:38 -- accel/accel.sh@17 -- # local accel_module 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.261 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.261 12:02:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.261 12:02:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.261 12:02:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.261 12:02:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.261 12:02:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.261 12:02:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.261 12:02:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.261 12:02:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.261 12:02:38 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.261 12:02:38 -- accel/accel.sh@41 -- # jq -r . 00:06:37.261 [2024-04-26 12:02:38.289724] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
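The mthread variant keeps a single core but passes -T 2; in its trace the value that is 1 in the single-threaded runs shows up as 2. Treating -T as the number of worker threads per core is an assumption drawn from that difference, not something the log states. Sketch (paths from this job, -c config omitted as before):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # same decompress workload with -T 2; the trace records val=2 where the other runs record val=1
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2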
00:06:37.261 [2024-04-26 12:02:38.289802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3216399 ] 00:06:37.261 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.262 [2024-04-26 12:02:38.355321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.262 [2024-04-26 12:02:38.426167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val=0x1 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val=decompress 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val=software 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@22 -- # accel_module=software 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val=32 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 
-- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val=32 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val=2 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val=Yes 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:37.262 12:02:38 -- accel/accel.sh@20 -- # val= 00:06:37.262 12:02:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # IFS=: 00:06:37.262 12:02:38 -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.649 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.649 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.649 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.649 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.649 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.649 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.649 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 12:02:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.649 12:02:39 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.649 12:02:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.649 00:06:38.649 real 0m1.302s 00:06:38.649 user 0m1.206s 00:06:38.649 sys 0m0.107s 00:06:38.649 12:02:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:38.649 12:02:39 -- common/autotest_common.sh@10 -- # set +x 
00:06:38.649 ************************************ 00:06:38.649 END TEST accel_decomp_mthread 00:06:38.649 ************************************ 00:06:38.649 12:02:39 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.649 12:02:39 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:38.649 12:02:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.649 12:02:39 -- common/autotest_common.sh@10 -- # set +x 00:06:38.649 ************************************ 00:06:38.649 START TEST accel_deomp_full_mthread 00:06:38.649 ************************************ 00:06:38.649 12:02:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.649 12:02:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.649 12:02:39 -- accel/accel.sh@17 -- # local accel_module 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.649 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.649 12:02:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.649 12:02:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.649 12:02:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.649 12:02:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.649 12:02:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.649 12:02:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.649 12:02:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.649 12:02:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.649 12:02:39 -- accel/accel.sh@40 -- # local IFS=, 00:06:38.649 12:02:39 -- accel/accel.sh@41 -- # jq -r . 00:06:38.649 [2024-04-26 12:02:39.776095] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:38.650 [2024-04-26 12:02:39.776165] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3216755 ] 00:06:38.650 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.650 [2024-04-26 12:02:39.841145] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.911 [2024-04-26 12:02:39.912556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val=0x1 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val=decompress 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val=software 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@22 -- # accel_module=software 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val=32 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 
12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val=32 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val=2 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val=Yes 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:38.911 12:02:39 -- accel/accel.sh@20 -- # val= 00:06:38.911 12:02:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # IFS=: 00:06:38.911 12:02:39 -- accel/accel.sh@19 -- # read -r var val 00:06:40.294 12:02:41 -- accel/accel.sh@20 -- # val= 00:06:40.294 12:02:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.294 12:02:41 -- accel/accel.sh@19 -- # IFS=: 00:06:40.294 12:02:41 -- accel/accel.sh@19 -- # read -r var val 00:06:40.294 12:02:41 -- accel/accel.sh@20 -- # val= 00:06:40.294 12:02:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.294 12:02:41 -- accel/accel.sh@19 -- # IFS=: 00:06:40.294 12:02:41 -- accel/accel.sh@19 -- # read -r var val 00:06:40.294 12:02:41 -- accel/accel.sh@20 -- # val= 00:06:40.294 12:02:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.294 12:02:41 -- accel/accel.sh@19 -- # IFS=: 00:06:40.294 12:02:41 -- accel/accel.sh@19 -- # read -r var val 00:06:40.294 12:02:41 -- accel/accel.sh@20 -- # val= 00:06:40.294 12:02:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.294 12:02:41 -- accel/accel.sh@19 -- # IFS=: 00:06:40.294 12:02:41 -- accel/accel.sh@19 -- # read -r var val 00:06:40.294 12:02:41 -- accel/accel.sh@20 -- # val= 00:06:40.294 12:02:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.294 12:02:41 -- accel/accel.sh@19 -- # IFS=: 00:06:40.294 12:02:41 -- accel/accel.sh@19 -- # read -r var val 00:06:40.294 12:02:41 -- accel/accel.sh@20 -- # val= 00:06:40.294 12:02:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.294 12:02:41 -- accel/accel.sh@19 -- # IFS=: 00:06:40.294 12:02:41 -- accel/accel.sh@19 -- # read -r var val 00:06:40.294 12:02:41 -- accel/accel.sh@20 -- # val= 00:06:40.294 12:02:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.294 12:02:41 -- accel/accel.sh@19 -- # IFS=: 00:06:40.294 12:02:41 -- accel/accel.sh@19 -- # read -r var val 00:06:40.294 12:02:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.294 12:02:41 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.294 12:02:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.294 00:06:40.294 real 0m1.331s 00:06:40.294 user 0m1.232s 00:06:40.294 sys 0m0.109s 00:06:40.294 12:02:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.294 12:02:41 -- common/autotest_common.sh@10 -- # 
set +x 00:06:40.294 ************************************ 00:06:40.294 END TEST accel_deomp_full_mthread 00:06:40.294 ************************************ 00:06:40.294 12:02:41 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:40.294 12:02:41 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:40.294 12:02:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:40.294 12:02:41 -- accel/accel.sh@137 -- # build_accel_config 00:06:40.294 12:02:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.294 12:02:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.294 12:02:41 -- common/autotest_common.sh@10 -- # set +x 00:06:40.294 12:02:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.294 12:02:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.294 12:02:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.294 12:02:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.294 12:02:41 -- accel/accel.sh@40 -- # local IFS=, 00:06:40.294 12:02:41 -- accel/accel.sh@41 -- # jq -r . 00:06:40.294 ************************************ 00:06:40.294 START TEST accel_dif_functional_tests 00:06:40.294 ************************************ 00:06:40.294 12:02:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:40.294 [2024-04-26 12:02:41.323354] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:40.294 [2024-04-26 12:02:41.323431] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217116 ] 00:06:40.294 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.294 [2024-04-26 12:02:41.388122] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.294 [2024-04-26 12:02:41.462210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.294 [2024-04-26 12:02:41.462328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.294 [2024-04-26 12:02:41.462331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.556 00:06:40.556 00:06:40.556 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.556 http://cunit.sourceforge.net/ 00:06:40.556 00:06:40.556 00:06:40.556 Suite: accel_dif 00:06:40.556 Test: verify: DIF generated, GUARD check ...passed 00:06:40.556 Test: verify: DIF generated, APPTAG check ...passed 00:06:40.556 Test: verify: DIF generated, REFTAG check ...passed 00:06:40.556 Test: verify: DIF not generated, GUARD check ...[2024-04-26 12:02:41.518169] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.556 [2024-04-26 12:02:41.518208] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.556 passed 00:06:40.556 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 12:02:41.518239] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.556 [2024-04-26 12:02:41.518253] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.556 passed 00:06:40.556 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 12:02:41.518269] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.556 [2024-04-26 
12:02:41.518284] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.556 passed 00:06:40.556 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:40.556 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-26 12:02:41.518327] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:40.556 passed 00:06:40.556 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:40.556 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:40.556 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:40.556 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 12:02:41.518443] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:40.556 passed 00:06:40.556 Test: generate copy: DIF generated, GUARD check ...passed 00:06:40.556 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:40.556 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:40.556 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:40.556 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:40.556 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:40.556 Test: generate copy: iovecs-len validate ...[2024-04-26 12:02:41.518628] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:40.556 passed 00:06:40.556 Test: generate copy: buffer alignment validate ...passed 00:06:40.556 00:06:40.556 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.556 suites 1 1 n/a 0 0 00:06:40.556 tests 20 20 20 0 0 00:06:40.556 asserts 204 204 204 0 n/a 00:06:40.556 00:06:40.556 Elapsed time = 0.000 seconds 00:06:40.556 00:06:40.556 real 0m0.369s 00:06:40.556 user 0m0.457s 00:06:40.556 sys 0m0.134s 00:06:40.556 12:02:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.556 12:02:41 -- common/autotest_common.sh@10 -- # set +x 00:06:40.556 ************************************ 00:06:40.556 END TEST accel_dif_functional_tests 00:06:40.556 ************************************ 00:06:40.556 00:06:40.556 real 0m32.818s 00:06:40.556 user 0m34.683s 00:06:40.556 sys 0m5.372s 00:06:40.556 12:02:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.556 12:02:41 -- common/autotest_common.sh@10 -- # set +x 00:06:40.556 ************************************ 00:06:40.556 END TEST accel 00:06:40.556 ************************************ 00:06:40.556 12:02:41 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:40.556 12:02:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.556 12:02:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.556 12:02:41 -- common/autotest_common.sh@10 -- # set +x 00:06:40.817 ************************************ 00:06:40.817 START TEST accel_rpc 00:06:40.817 ************************************ 00:06:40.817 12:02:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:40.817 * Looking for test storage... 
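Every "START TEST ... / END TEST ..." banner and the real/user/sys triple just before it appear to come from the run_test helper in autotest_common.sh, which times the named command and fences its output; the xtrace_disable/set +x pairs are what keep those banners readable inside the set -x trace. As an illustrative reduction only (not the actual helper, which also manages timing and xtrace state):

    run_test() {                      # conceptual sketch of the harness wrapper
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                     # produces the real/user/sys lines seen above
        echo "END TEST $name"
    }
    run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh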
00:06:40.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:40.817 12:02:41 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.817 12:02:41 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3217258 00:06:40.817 12:02:41 -- accel/accel_rpc.sh@15 -- # waitforlisten 3217258 00:06:40.817 12:02:41 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:40.817 12:02:41 -- common/autotest_common.sh@817 -- # '[' -z 3217258 ']' 00:06:40.817 12:02:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.817 12:02:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:40.817 12:02:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.817 12:02:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:40.817 12:02:41 -- common/autotest_common.sh@10 -- # set +x 00:06:41.078 [2024-04-26 12:02:42.044804] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:41.078 [2024-04-26 12:02:42.044877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217258 ] 00:06:41.078 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.078 [2024-04-26 12:02:42.111928] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.078 [2024-04-26 12:02:42.184766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.650 12:02:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:41.650 12:02:42 -- common/autotest_common.sh@850 -- # return 0 00:06:41.650 12:02:42 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:41.650 12:02:42 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:41.650 12:02:42 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:41.650 12:02:42 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:41.650 12:02:42 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:41.650 12:02:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.650 12:02:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.650 12:02:42 -- common/autotest_common.sh@10 -- # set +x 00:06:41.912 ************************************ 00:06:41.912 START TEST accel_assign_opcode 00:06:41.912 ************************************ 00:06:41.912 12:02:42 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:41.912 12:02:42 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:41.912 12:02:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.912 12:02:42 -- common/autotest_common.sh@10 -- # set +x 00:06:41.912 [2024-04-26 12:02:42.967017] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:41.912 12:02:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.912 12:02:42 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:41.912 12:02:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.912 12:02:42 -- common/autotest_common.sh@10 -- # set +x 00:06:41.912 [2024-04-26 12:02:42.975027] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:06:41.912 12:02:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.912 12:02:42 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:41.912 12:02:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.912 12:02:42 -- common/autotest_common.sh@10 -- # set +x 00:06:41.912 12:02:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.912 12:02:43 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:41.912 12:02:43 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:41.912 12:02:43 -- accel/accel_rpc.sh@42 -- # grep software 00:06:41.912 12:02:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.912 12:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:42.173 12:02:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.173 software 00:06:42.173 00:06:42.173 real 0m0.206s 00:06:42.173 user 0m0.050s 00:06:42.173 sys 0m0.006s 00:06:42.173 12:02:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.173 12:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:42.173 ************************************ 00:06:42.173 END TEST accel_assign_opcode 00:06:42.173 ************************************ 00:06:42.173 12:02:43 -- accel/accel_rpc.sh@55 -- # killprocess 3217258 00:06:42.173 12:02:43 -- common/autotest_common.sh@936 -- # '[' -z 3217258 ']' 00:06:42.173 12:02:43 -- common/autotest_common.sh@940 -- # kill -0 3217258 00:06:42.173 12:02:43 -- common/autotest_common.sh@941 -- # uname 00:06:42.173 12:02:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:42.173 12:02:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3217258 00:06:42.173 12:02:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:42.173 12:02:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:42.173 12:02:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3217258' 00:06:42.173 killing process with pid 3217258 00:06:42.173 12:02:43 -- common/autotest_common.sh@955 -- # kill 3217258 00:06:42.173 12:02:43 -- common/autotest_common.sh@960 -- # wait 3217258 00:06:42.434 00:06:42.434 real 0m1.595s 00:06:42.434 user 0m1.721s 00:06:42.434 sys 0m0.465s 00:06:42.434 12:02:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.434 12:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:42.434 ************************************ 00:06:42.434 END TEST accel_rpc 00:06:42.434 ************************************ 00:06:42.434 12:02:43 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:42.434 12:02:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.434 12:02:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.434 12:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:42.434 ************************************ 00:06:42.434 START TEST app_cmdline 00:06:42.434 ************************************ 00:06:42.434 12:02:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:42.696 * Looking for test storage... 
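The accel_rpc case above boils down to four RPCs against a target started with --wait-for-rpc: assign the copy opcode to a non-existent module, re-assign it to the software module, finish framework init, then confirm which module won. The trace's rpc_cmd wrapper maps one-for-one onto scripts/rpc.py, so a hedged standalone sketch of the same sequence is:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m incorrect   # accepted pre-init, logged as a NOTICE
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software    # later assignment replaces the earlier one
    "$SPDK/scripts/rpc.py" framework_start_init
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy # expected output: software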
00:06:42.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:42.696 12:02:43 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:42.696 12:02:43 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3217726 00:06:42.696 12:02:43 -- app/cmdline.sh@18 -- # waitforlisten 3217726 00:06:42.696 12:02:43 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:42.696 12:02:43 -- common/autotest_common.sh@817 -- # '[' -z 3217726 ']' 00:06:42.696 12:02:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.696 12:02:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:42.696 12:02:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.696 12:02:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:42.696 12:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:42.696 [2024-04-26 12:02:43.811552] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:42.696 [2024-04-26 12:02:43.811629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217726 ] 00:06:42.696 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.696 [2024-04-26 12:02:43.876254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.957 [2024-04-26 12:02:43.949415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.529 12:02:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:43.529 12:02:44 -- common/autotest_common.sh@850 -- # return 0 00:06:43.529 12:02:44 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:43.529 { 00:06:43.529 "version": "SPDK v24.05-pre git sha1 06472fb6d", 00:06:43.529 "fields": { 00:06:43.529 "major": 24, 00:06:43.529 "minor": 5, 00:06:43.529 "patch": 0, 00:06:43.529 "suffix": "-pre", 00:06:43.529 "commit": "06472fb6d" 00:06:43.529 } 00:06:43.529 } 00:06:43.529 12:02:44 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:43.529 12:02:44 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:43.529 12:02:44 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:43.529 12:02:44 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:43.529 12:02:44 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:43.529 12:02:44 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:43.529 12:02:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:43.529 12:02:44 -- app/cmdline.sh@26 -- # sort 00:06:43.529 12:02:44 -- common/autotest_common.sh@10 -- # set +x 00:06:43.529 12:02:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:43.790 12:02:44 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:43.791 12:02:44 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:43.791 12:02:44 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.791 12:02:44 -- common/autotest_common.sh@638 -- # local es=0 00:06:43.791 12:02:44 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.791 12:02:44 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.791 12:02:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.791 12:02:44 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.791 12:02:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.791 12:02:44 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.791 12:02:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.791 12:02:44 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.791 12:02:44 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:43.791 12:02:44 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.791 request: 00:06:43.791 { 00:06:43.791 "method": "env_dpdk_get_mem_stats", 00:06:43.791 "req_id": 1 00:06:43.791 } 00:06:43.791 Got JSON-RPC error response 00:06:43.791 response: 00:06:43.791 { 00:06:43.791 "code": -32601, 00:06:43.791 "message": "Method not found" 00:06:43.791 } 00:06:43.791 12:02:44 -- common/autotest_common.sh@641 -- # es=1 00:06:43.791 12:02:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:43.791 12:02:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:43.791 12:02:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:43.791 12:02:44 -- app/cmdline.sh@1 -- # killprocess 3217726 00:06:43.791 12:02:44 -- common/autotest_common.sh@936 -- # '[' -z 3217726 ']' 00:06:43.791 12:02:44 -- common/autotest_common.sh@940 -- # kill -0 3217726 00:06:43.791 12:02:44 -- common/autotest_common.sh@941 -- # uname 00:06:43.791 12:02:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:43.791 12:02:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3217726 00:06:43.791 12:02:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:43.791 12:02:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:43.791 12:02:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3217726' 00:06:43.791 killing process with pid 3217726 00:06:43.791 12:02:44 -- common/autotest_common.sh@955 -- # kill 3217726 00:06:43.791 12:02:44 -- common/autotest_common.sh@960 -- # wait 3217726 00:06:44.052 00:06:44.052 real 0m1.547s 00:06:44.052 user 0m1.830s 00:06:44.052 sys 0m0.419s 00:06:44.052 12:02:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:44.052 12:02:45 -- common/autotest_common.sh@10 -- # set +x 00:06:44.052 ************************************ 00:06:44.052 END TEST app_cmdline 00:06:44.052 ************************************ 00:06:44.052 12:02:45 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:44.052 12:02:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:44.052 12:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.052 12:02:45 -- common/autotest_common.sh@10 -- # set +x 00:06:44.314 ************************************ 00:06:44.314 START TEST version 00:06:44.314 
************************************ 00:06:44.314 12:02:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:44.314 * Looking for test storage... 00:06:44.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:44.314 12:02:45 -- app/version.sh@17 -- # get_header_version major 00:06:44.314 12:02:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.314 12:02:45 -- app/version.sh@14 -- # cut -f2 00:06:44.314 12:02:45 -- app/version.sh@14 -- # tr -d '"' 00:06:44.314 12:02:45 -- app/version.sh@17 -- # major=24 00:06:44.314 12:02:45 -- app/version.sh@18 -- # get_header_version minor 00:06:44.314 12:02:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.314 12:02:45 -- app/version.sh@14 -- # cut -f2 00:06:44.314 12:02:45 -- app/version.sh@14 -- # tr -d '"' 00:06:44.314 12:02:45 -- app/version.sh@18 -- # minor=5 00:06:44.314 12:02:45 -- app/version.sh@19 -- # get_header_version patch 00:06:44.314 12:02:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.314 12:02:45 -- app/version.sh@14 -- # cut -f2 00:06:44.314 12:02:45 -- app/version.sh@14 -- # tr -d '"' 00:06:44.314 12:02:45 -- app/version.sh@19 -- # patch=0 00:06:44.314 12:02:45 -- app/version.sh@20 -- # get_header_version suffix 00:06:44.314 12:02:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.314 12:02:45 -- app/version.sh@14 -- # cut -f2 00:06:44.314 12:02:45 -- app/version.sh@14 -- # tr -d '"' 00:06:44.314 12:02:45 -- app/version.sh@20 -- # suffix=-pre 00:06:44.314 12:02:45 -- app/version.sh@22 -- # version=24.5 00:06:44.314 12:02:45 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:44.314 12:02:45 -- app/version.sh@28 -- # version=24.5rc0 00:06:44.314 12:02:45 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:44.314 12:02:45 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:44.576 12:02:45 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:44.576 12:02:45 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:44.576 00:06:44.576 real 0m0.178s 00:06:44.576 user 0m0.093s 00:06:44.576 sys 0m0.123s 00:06:44.576 12:02:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:44.576 12:02:45 -- common/autotest_common.sh@10 -- # set +x 00:06:44.576 ************************************ 00:06:44.576 END TEST version 00:06:44.576 ************************************ 00:06:44.576 12:02:45 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:44.576 12:02:45 -- spdk/autotest.sh@194 -- # uname -s 00:06:44.576 12:02:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:44.576 12:02:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:44.576 12:02:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:44.576 12:02:45 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:44.576 12:02:45 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:44.576 12:02:45 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:44.576 12:02:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:44.576 12:02:45 -- common/autotest_common.sh@10 -- # set +x 00:06:44.576 12:02:45 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:44.576 12:02:45 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:44.576 12:02:45 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:44.576 12:02:45 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:44.576 12:02:45 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:44.576 12:02:45 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:44.576 12:02:45 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:44.576 12:02:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:44.576 12:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.576 12:02:45 -- common/autotest_common.sh@10 -- # set +x 00:06:44.576 ************************************ 00:06:44.576 START TEST nvmf_tcp 00:06:44.576 ************************************ 00:06:44.576 12:02:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:44.838 * Looking for test storage... 00:06:44.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:44.838 12:02:45 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:44.838 12:02:45 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:44.838 12:02:45 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.838 12:02:45 -- nvmf/common.sh@7 -- # uname -s 00:06:44.838 12:02:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.838 12:02:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.838 12:02:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.838 12:02:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.838 12:02:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.838 12:02:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.838 12:02:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.838 12:02:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.838 12:02:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.838 12:02:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.838 12:02:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:44.838 12:02:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:44.838 12:02:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.838 12:02:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.838 12:02:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.838 12:02:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.838 12:02:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.838 12:02:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.838 12:02:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.838 12:02:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.838 12:02:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.839 12:02:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.839 12:02:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.839 12:02:45 -- paths/export.sh@5 -- # export PATH 00:06:44.839 12:02:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.839 12:02:45 -- nvmf/common.sh@47 -- # : 0 00:06:44.839 12:02:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:44.839 12:02:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:44.839 12:02:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.839 12:02:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.839 12:02:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.839 12:02:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:44.839 12:02:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:44.839 12:02:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:44.839 12:02:45 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:44.839 12:02:45 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:44.839 12:02:45 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:44.839 12:02:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:44.839 12:02:45 -- common/autotest_common.sh@10 -- # set +x 00:06:44.839 12:02:45 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:44.839 12:02:45 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:44.839 12:02:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:44.839 12:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.839 12:02:45 -- common/autotest_common.sh@10 -- # set +x 00:06:45.100 ************************************ 00:06:45.100 START TEST nvmf_example 00:06:45.100 ************************************ 00:06:45.100 12:02:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:45.100 * Looking for test storage... 
00:06:45.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.100 12:02:46 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.100 12:02:46 -- nvmf/common.sh@7 -- # uname -s 00:06:45.100 12:02:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.100 12:02:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.100 12:02:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.100 12:02:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.100 12:02:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.100 12:02:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.100 12:02:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.100 12:02:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.100 12:02:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.100 12:02:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.100 12:02:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:45.100 12:02:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:45.100 12:02:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.100 12:02:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.100 12:02:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.100 12:02:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.100 12:02:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.100 12:02:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.100 12:02:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.100 12:02:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.100 12:02:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.100 12:02:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.100 12:02:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.100 12:02:46 -- paths/export.sh@5 -- # export PATH 00:06:45.100 12:02:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.100 12:02:46 -- nvmf/common.sh@47 -- # : 0 00:06:45.100 12:02:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:45.100 12:02:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:45.100 12:02:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.101 12:02:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.101 12:02:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.101 12:02:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:45.101 12:02:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:45.101 12:02:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:45.101 12:02:46 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:45.101 12:02:46 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:45.101 12:02:46 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:45.101 12:02:46 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:45.101 12:02:46 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:45.101 12:02:46 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:45.101 12:02:46 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:45.101 12:02:46 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:45.101 12:02:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:45.101 12:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:45.101 12:02:46 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:45.101 12:02:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:45.101 12:02:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.101 12:02:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:45.101 12:02:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:45.101 12:02:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:45.101 12:02:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.101 12:02:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.101 12:02:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.101 12:02:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:45.101 12:02:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:45.101 12:02:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:45.101 12:02:46 -- 
common/autotest_common.sh@10 -- # set +x 00:06:53.242 12:02:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:53.242 12:02:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:53.242 12:02:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:53.242 12:02:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:53.242 12:02:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:53.242 12:02:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:53.242 12:02:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:53.242 12:02:52 -- nvmf/common.sh@295 -- # net_devs=() 00:06:53.242 12:02:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:53.242 12:02:52 -- nvmf/common.sh@296 -- # e810=() 00:06:53.242 12:02:52 -- nvmf/common.sh@296 -- # local -ga e810 00:06:53.242 12:02:52 -- nvmf/common.sh@297 -- # x722=() 00:06:53.242 12:02:52 -- nvmf/common.sh@297 -- # local -ga x722 00:06:53.242 12:02:52 -- nvmf/common.sh@298 -- # mlx=() 00:06:53.242 12:02:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:53.242 12:02:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.242 12:02:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.242 12:02:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.242 12:02:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.242 12:02:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.242 12:02:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.242 12:02:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.242 12:02:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.242 12:02:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.242 12:02:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.242 12:02:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.242 12:02:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:53.242 12:02:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:53.242 12:02:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:53.242 12:02:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.242 12:02:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:53.242 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:53.242 12:02:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.242 12:02:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:53.242 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:53.242 12:02:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
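NIC discovery here is purely sysfs-based: the harness caches PCI vendor/device IDs, keeps the e810 entries (the "[[ e810 == e810 ]]" branch above), and, as the next lines show, resolves each PCI function to its kernel net device through the /sys/bus/pci/devices/<bdf>/net/ directory. An equivalent one-liner for the two functions found in this run, assuming the same BDFs:

    for bdf in 0000:31:00.0 0000:31:00.1; do
        ls "/sys/bus/pci/devices/$bdf/net/"   # prints the netdev name; cvl_0_0 and cvl_0_1 in this run
    done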
00:06:53.242 12:02:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:53.242 12:02:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.242 12:02:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.242 12:02:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:53.242 12:02:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.242 12:02:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:53.242 Found net devices under 0000:31:00.0: cvl_0_0 00:06:53.242 12:02:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.242 12:02:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.242 12:02:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.242 12:02:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:53.242 12:02:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.242 12:02:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:53.242 Found net devices under 0000:31:00.1: cvl_0_1 00:06:53.242 12:02:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.242 12:02:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:53.242 12:02:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:53.242 12:02:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:53.242 12:02:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:53.242 12:02:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.242 12:02:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.242 12:02:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.242 12:02:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:53.242 12:02:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.242 12:02:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.242 12:02:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:53.242 12:02:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.242 12:02:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.242 12:02:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:53.242 12:02:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:53.242 12:02:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.242 12:02:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.242 12:02:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.242 12:02:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.242 12:02:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:53.242 12:02:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.242 12:02:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.242 12:02:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.242 12:02:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:53.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:53.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:06:53.242 00:06:53.242 --- 10.0.0.2 ping statistics --- 00:06:53.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.242 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:06:53.242 12:02:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:06:53.242 00:06:53.242 --- 10.0.0.1 ping statistics --- 00:06:53.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.242 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:06:53.242 12:02:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.242 12:02:53 -- nvmf/common.sh@411 -- # return 0 00:06:53.242 12:02:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:53.242 12:02:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.242 12:02:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:53.243 12:02:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:53.243 12:02:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.243 12:02:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:53.243 12:02:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:53.243 12:02:53 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:53.243 12:02:53 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:53.243 12:02:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:53.243 12:02:53 -- common/autotest_common.sh@10 -- # set +x 00:06:53.243 12:02:53 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:53.243 12:02:53 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:53.243 12:02:53 -- target/nvmf_example.sh@34 -- # nvmfpid=3222121 00:06:53.243 12:02:53 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:53.243 12:02:53 -- target/nvmf_example.sh@36 -- # waitforlisten 3222121 00:06:53.243 12:02:53 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:53.243 12:02:53 -- common/autotest_common.sh@817 -- # '[' -z 3222121 ']' 00:06:53.243 12:02:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.243 12:02:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:53.243 12:02:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
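The TCP "phy" topology assembled above puts the first e810 port (cvl_0_0, the target side, 10.0.0.2) into a private network namespace and leaves the second port (cvl_0_1, the initiator side, 10.0.0.1) in the default namespace, so the two sides presumably talk over the physical link between the ports rather than over loopback. Condensed from the trace, the essential commands are:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The nvmf example target is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/examples/nvmf -i 0 -g 10000 -m 0xF), which is why it listens on 10.0.0.2.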
00:06:53.243 12:02:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:53.243 12:02:53 -- common/autotest_common.sh@10 -- # set +x 00:06:53.243 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.243 12:02:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:53.243 12:02:54 -- common/autotest_common.sh@850 -- # return 0 00:06:53.243 12:02:54 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:53.243 12:02:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:53.243 12:02:54 -- common/autotest_common.sh@10 -- # set +x 00:06:53.243 12:02:54 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:53.243 12:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:53.243 12:02:54 -- common/autotest_common.sh@10 -- # set +x 00:06:53.243 12:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:53.243 12:02:54 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:53.243 12:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:53.243 12:02:54 -- common/autotest_common.sh@10 -- # set +x 00:06:53.243 12:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:53.243 12:02:54 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:53.243 12:02:54 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:53.243 12:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:53.243 12:02:54 -- common/autotest_common.sh@10 -- # set +x 00:06:53.243 12:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:53.243 12:02:54 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:53.243 12:02:54 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:53.243 12:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:53.243 12:02:54 -- common/autotest_common.sh@10 -- # set +x 00:06:53.243 12:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:53.243 12:02:54 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:53.243 12:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:53.243 12:02:54 -- common/autotest_common.sh@10 -- # set +x 00:06:53.243 12:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:53.243 12:02:54 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:53.243 12:02:54 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:53.243 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.479 Initializing NVMe Controllers 00:07:05.479 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:05.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:05.479 Initialization complete. Launching workers. 
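Before the performance run, the example target (build/examples/nvmf, already running inside the namespace and pinned to four cores with -m 0xF) is configured entirely over JSON-RPC: a TCP transport with an 8192-byte in-capsule data size, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, and a TCP listener on 10.0.0.2:4420. spdk_nvme_perf then drives a queue-depth-64, 4 KiB, 30% read / 70% write random workload against it for 10 seconds, producing the latency table that follows. In the trace, rpc_cmd is the harness's wrapper around scripts/rpc.py; issued by hand the same sequence would look roughly like this (paths shortened, flags copied from the log):

# Hand-run equivalent of the RPC setup traced above (a sketch; the harness's
# rpc_cmd forwards these arguments to scripts/rpc.py).
rpc=scripts/rpc.py                         # path relative to the spdk repo root
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512             # prints the new bdev name, e.g. Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Same workload as the trace: QD 64, 4 KiB I/O, 30% read / 70% write, 10 s.
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'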
00:07:05.479 ======================================================== 00:07:05.479 Latency(us) 00:07:05.479 Device Information : IOPS MiB/s Average min max 00:07:05.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18578.43 72.57 3444.19 639.01 20288.57 00:07:05.479 ======================================================== 00:07:05.479 Total : 18578.43 72.57 3444.19 639.01 20288.57 00:07:05.479 00:07:05.479 12:03:04 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:05.479 12:03:04 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:05.479 12:03:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:05.479 12:03:04 -- nvmf/common.sh@117 -- # sync 00:07:05.479 12:03:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:05.479 12:03:04 -- nvmf/common.sh@120 -- # set +e 00:07:05.479 12:03:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:05.479 12:03:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:05.479 rmmod nvme_tcp 00:07:05.479 rmmod nvme_fabrics 00:07:05.479 rmmod nvme_keyring 00:07:05.479 12:03:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:05.479 12:03:04 -- nvmf/common.sh@124 -- # set -e 00:07:05.479 12:03:04 -- nvmf/common.sh@125 -- # return 0 00:07:05.479 12:03:04 -- nvmf/common.sh@478 -- # '[' -n 3222121 ']' 00:07:05.479 12:03:04 -- nvmf/common.sh@479 -- # killprocess 3222121 00:07:05.479 12:03:04 -- common/autotest_common.sh@936 -- # '[' -z 3222121 ']' 00:07:05.479 12:03:04 -- common/autotest_common.sh@940 -- # kill -0 3222121 00:07:05.479 12:03:04 -- common/autotest_common.sh@941 -- # uname 00:07:05.479 12:03:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:05.479 12:03:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3222121 00:07:05.479 12:03:04 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:05.479 12:03:04 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:05.479 12:03:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3222121' 00:07:05.479 killing process with pid 3222121 00:07:05.479 12:03:04 -- common/autotest_common.sh@955 -- # kill 3222121 00:07:05.479 12:03:04 -- common/autotest_common.sh@960 -- # wait 3222121 00:07:05.479 nvmf threads initialize successfully 00:07:05.479 bdev subsystem init successfully 00:07:05.479 created a nvmf target service 00:07:05.479 create targets's poll groups done 00:07:05.479 all subsystems of target started 00:07:05.479 nvmf target is running 00:07:05.479 all subsystems of target stopped 00:07:05.479 destroy targets's poll groups done 00:07:05.479 destroyed the nvmf target service 00:07:05.479 bdev subsystem finish successfully 00:07:05.479 nvmf threads destroy successfully 00:07:05.479 12:03:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:05.479 12:03:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:05.479 12:03:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:05.479 12:03:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:05.479 12:03:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:05.479 12:03:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.479 12:03:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.479 12:03:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.820 12:03:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:05.820 12:03:06 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:05.820 12:03:06 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:07:05.820 12:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:05.820 00:07:05.820 real 0m20.867s 00:07:05.820 user 0m46.803s 00:07:05.820 sys 0m6.242s 00:07:05.820 12:03:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.820 12:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:05.820 ************************************ 00:07:05.820 END TEST nvmf_example 00:07:05.820 ************************************ 00:07:05.820 12:03:06 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:05.820 12:03:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:05.820 12:03:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.820 12:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:06.084 ************************************ 00:07:06.084 START TEST nvmf_filesystem 00:07:06.084 ************************************ 00:07:06.084 12:03:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:06.084 * Looking for test storage... 00:07:06.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.084 12:03:07 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:06.084 12:03:07 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:06.084 12:03:07 -- common/autotest_common.sh@34 -- # set -e 00:07:06.084 12:03:07 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:06.084 12:03:07 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:06.084 12:03:07 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:06.084 12:03:07 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:06.084 12:03:07 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:06.084 12:03:07 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:06.084 12:03:07 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:06.084 12:03:07 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:06.084 12:03:07 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:06.084 12:03:07 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:06.084 12:03:07 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:06.084 12:03:07 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:06.084 12:03:07 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:06.084 12:03:07 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:06.084 12:03:07 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:06.084 12:03:07 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:06.084 12:03:07 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:06.084 12:03:07 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:06.084 12:03:07 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:06.084 12:03:07 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:06.084 12:03:07 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:06.084 12:03:07 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:06.084 12:03:07 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:06.084 12:03:07 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:06.084 12:03:07 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:06.084 12:03:07 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:06.084 12:03:07 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:06.084 12:03:07 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:06.084 12:03:07 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:06.084 12:03:07 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:06.084 12:03:07 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:06.084 12:03:07 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:06.084 12:03:07 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:06.084 12:03:07 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:06.084 12:03:07 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:06.084 12:03:07 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:06.084 12:03:07 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:06.084 12:03:07 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:06.084 12:03:07 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:06.084 12:03:07 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:06.084 12:03:07 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:06.084 12:03:07 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:06.084 12:03:07 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:06.084 12:03:07 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:06.084 12:03:07 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:06.084 12:03:07 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:06.084 12:03:07 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:06.084 12:03:07 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:06.084 12:03:07 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:06.084 12:03:07 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:06.084 12:03:07 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:06.084 12:03:07 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:06.084 12:03:07 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:06.084 12:03:07 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:06.084 12:03:07 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:06.084 12:03:07 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:06.084 12:03:07 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:06.084 12:03:07 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:06.084 12:03:07 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:07:06.085 12:03:07 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:07:06.085 12:03:07 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:06.085 12:03:07 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:06.085 12:03:07 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:06.085 12:03:07 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:07:06.085 12:03:07 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:06.085 12:03:07 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:06.085 12:03:07 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:07:06.085 12:03:07 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:07:06.085 12:03:07 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:06.085 
12:03:07 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:07:06.085 12:03:07 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:06.085 12:03:07 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:06.085 12:03:07 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:06.085 12:03:07 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:06.085 12:03:07 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:06.085 12:03:07 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:06.085 12:03:07 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:06.085 12:03:07 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:06.085 12:03:07 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:06.085 12:03:07 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:06.085 12:03:07 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:06.085 12:03:07 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:06.085 12:03:07 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:06.085 12:03:07 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:06.085 12:03:07 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:06.085 12:03:07 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:06.085 12:03:07 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:07:06.085 12:03:07 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:06.085 12:03:07 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:06.085 12:03:07 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:06.085 12:03:07 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:06.085 12:03:07 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:06.085 12:03:07 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:06.085 12:03:07 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:06.085 12:03:07 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:06.085 12:03:07 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:06.085 12:03:07 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:06.085 12:03:07 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:06.085 12:03:07 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:06.085 12:03:07 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:06.085 12:03:07 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:06.085 12:03:07 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:06.085 12:03:07 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:06.085 #define SPDK_CONFIG_H 00:07:06.085 #define SPDK_CONFIG_APPS 1 00:07:06.085 #define SPDK_CONFIG_ARCH native 00:07:06.085 #undef SPDK_CONFIG_ASAN 00:07:06.085 #undef SPDK_CONFIG_AVAHI 00:07:06.085 #undef SPDK_CONFIG_CET 00:07:06.085 #define SPDK_CONFIG_COVERAGE 1 00:07:06.085 #define SPDK_CONFIG_CROSS_PREFIX 00:07:06.085 #undef SPDK_CONFIG_CRYPTO 00:07:06.085 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:06.085 #undef 
SPDK_CONFIG_CUSTOMOCF 00:07:06.085 #undef SPDK_CONFIG_DAOS 00:07:06.085 #define SPDK_CONFIG_DAOS_DIR 00:07:06.085 #define SPDK_CONFIG_DEBUG 1 00:07:06.085 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:06.085 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:06.085 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:06.085 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:06.085 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:06.085 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:06.085 #define SPDK_CONFIG_EXAMPLES 1 00:07:06.085 #undef SPDK_CONFIG_FC 00:07:06.085 #define SPDK_CONFIG_FC_PATH 00:07:06.085 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:06.085 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:06.085 #undef SPDK_CONFIG_FUSE 00:07:06.085 #undef SPDK_CONFIG_FUZZER 00:07:06.085 #define SPDK_CONFIG_FUZZER_LIB 00:07:06.085 #undef SPDK_CONFIG_GOLANG 00:07:06.085 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:06.085 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:06.085 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:06.085 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:06.085 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:06.085 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:06.085 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:06.085 #define SPDK_CONFIG_IDXD 1 00:07:06.085 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:06.085 #undef SPDK_CONFIG_IPSEC_MB 00:07:06.085 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:06.085 #define SPDK_CONFIG_ISAL 1 00:07:06.085 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:06.085 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:06.085 #define SPDK_CONFIG_LIBDIR 00:07:06.085 #undef SPDK_CONFIG_LTO 00:07:06.085 #define SPDK_CONFIG_MAX_LCORES 00:07:06.085 #define SPDK_CONFIG_NVME_CUSE 1 00:07:06.085 #undef SPDK_CONFIG_OCF 00:07:06.085 #define SPDK_CONFIG_OCF_PATH 00:07:06.085 #define SPDK_CONFIG_OPENSSL_PATH 00:07:06.085 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:06.085 #define SPDK_CONFIG_PGO_DIR 00:07:06.085 #undef SPDK_CONFIG_PGO_USE 00:07:06.085 #define SPDK_CONFIG_PREFIX /usr/local 00:07:06.085 #undef SPDK_CONFIG_RAID5F 00:07:06.085 #undef SPDK_CONFIG_RBD 00:07:06.085 #define SPDK_CONFIG_RDMA 1 00:07:06.085 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:06.085 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:06.085 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:06.085 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:06.085 #define SPDK_CONFIG_SHARED 1 00:07:06.085 #undef SPDK_CONFIG_SMA 00:07:06.085 #define SPDK_CONFIG_TESTS 1 00:07:06.085 #undef SPDK_CONFIG_TSAN 00:07:06.085 #define SPDK_CONFIG_UBLK 1 00:07:06.085 #define SPDK_CONFIG_UBSAN 1 00:07:06.085 #undef SPDK_CONFIG_UNIT_TESTS 00:07:06.085 #undef SPDK_CONFIG_URING 00:07:06.085 #define SPDK_CONFIG_URING_PATH 00:07:06.085 #undef SPDK_CONFIG_URING_ZNS 00:07:06.085 #undef SPDK_CONFIG_USDT 00:07:06.085 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:06.085 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:06.085 #define SPDK_CONFIG_VFIO_USER 1 00:07:06.085 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:06.085 #define SPDK_CONFIG_VHOST 1 00:07:06.085 #define SPDK_CONFIG_VIRTIO 1 00:07:06.085 #undef SPDK_CONFIG_VTUNE 00:07:06.085 #define SPDK_CONFIG_VTUNE_DIR 00:07:06.085 #define SPDK_CONFIG_WERROR 1 00:07:06.085 #define SPDK_CONFIG_WPDK_DIR 00:07:06.085 #undef SPDK_CONFIG_XNVME 00:07:06.085 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:06.085 12:03:07 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:06.085 12:03:07 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.085 12:03:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.085 12:03:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.085 12:03:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.085 12:03:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.085 12:03:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.085 12:03:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.085 12:03:07 -- paths/export.sh@5 -- # export PATH 00:07:06.085 12:03:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.085 12:03:07 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:06.085 12:03:07 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:06.085 12:03:07 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:06.085 12:03:07 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:06.085 12:03:07 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:06.085 12:03:07 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:06.085 12:03:07 -- pm/common@67 -- # TEST_TAG=N/A 00:07:06.085 12:03:07 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:06.085 12:03:07 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:06.085 12:03:07 -- pm/common@71 -- # uname -s 00:07:06.085 12:03:07 -- pm/common@71 -- # PM_OS=Linux 00:07:06.085 12:03:07 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:06.085 12:03:07 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:07:06.086 12:03:07 -- pm/common@76 -- # [[ Linux == Linux ]] 00:07:06.086 12:03:07 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:07:06.086 12:03:07 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:07:06.086 12:03:07 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:06.086 12:03:07 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:06.086 12:03:07 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:07:06.086 12:03:07 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:07:06.086 12:03:07 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:06.086 12:03:07 -- common/autotest_common.sh@57 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:06.086 12:03:07 -- common/autotest_common.sh@61 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:06.086 12:03:07 -- common/autotest_common.sh@63 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:06.086 12:03:07 -- common/autotest_common.sh@65 -- # : 1 00:07:06.086 12:03:07 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:06.086 12:03:07 -- common/autotest_common.sh@67 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:06.086 12:03:07 -- common/autotest_common.sh@69 -- # : 00:07:06.086 12:03:07 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:06.086 12:03:07 -- common/autotest_common.sh@71 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:06.086 12:03:07 -- common/autotest_common.sh@73 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:06.086 12:03:07 -- common/autotest_common.sh@75 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:06.086 12:03:07 -- common/autotest_common.sh@77 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:06.086 12:03:07 -- common/autotest_common.sh@79 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:06.086 12:03:07 -- common/autotest_common.sh@81 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:06.086 12:03:07 -- common/autotest_common.sh@83 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:06.086 12:03:07 -- common/autotest_common.sh@85 -- # : 1 00:07:06.086 12:03:07 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:06.086 12:03:07 -- common/autotest_common.sh@87 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:06.086 12:03:07 -- common/autotest_common.sh@89 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:06.086 12:03:07 -- common/autotest_common.sh@91 -- # : 1 
00:07:06.086 12:03:07 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:06.086 12:03:07 -- common/autotest_common.sh@93 -- # : 1 00:07:06.086 12:03:07 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:06.086 12:03:07 -- common/autotest_common.sh@95 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:06.086 12:03:07 -- common/autotest_common.sh@97 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:06.086 12:03:07 -- common/autotest_common.sh@99 -- # : 0 00:07:06.086 12:03:07 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:06.086 12:03:07 -- common/autotest_common.sh@101 -- # : tcp 00:07:06.086 12:03:07 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:06.349 12:03:07 -- common/autotest_common.sh@103 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:06.349 12:03:07 -- common/autotest_common.sh@105 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:06.349 12:03:07 -- common/autotest_common.sh@107 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:06.349 12:03:07 -- common/autotest_common.sh@109 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:06.349 12:03:07 -- common/autotest_common.sh@111 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:06.349 12:03:07 -- common/autotest_common.sh@113 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:06.349 12:03:07 -- common/autotest_common.sh@115 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:06.349 12:03:07 -- common/autotest_common.sh@117 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:06.349 12:03:07 -- common/autotest_common.sh@119 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:06.349 12:03:07 -- common/autotest_common.sh@121 -- # : 1 00:07:06.349 12:03:07 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:06.349 12:03:07 -- common/autotest_common.sh@123 -- # : 00:07:06.349 12:03:07 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:06.349 12:03:07 -- common/autotest_common.sh@125 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:06.349 12:03:07 -- common/autotest_common.sh@127 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:06.349 12:03:07 -- common/autotest_common.sh@129 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:06.349 12:03:07 -- common/autotest_common.sh@131 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:06.349 12:03:07 -- common/autotest_common.sh@133 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:06.349 12:03:07 -- common/autotest_common.sh@135 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:06.349 12:03:07 -- common/autotest_common.sh@137 -- # : 00:07:06.349 12:03:07 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:06.349 12:03:07 -- 
common/autotest_common.sh@139 -- # : true 00:07:06.349 12:03:07 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:06.349 12:03:07 -- common/autotest_common.sh@141 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:06.349 12:03:07 -- common/autotest_common.sh@143 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:06.349 12:03:07 -- common/autotest_common.sh@145 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:06.349 12:03:07 -- common/autotest_common.sh@147 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:06.349 12:03:07 -- common/autotest_common.sh@149 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:06.349 12:03:07 -- common/autotest_common.sh@151 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:06.349 12:03:07 -- common/autotest_common.sh@153 -- # : e810 00:07:06.349 12:03:07 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:06.349 12:03:07 -- common/autotest_common.sh@155 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:06.349 12:03:07 -- common/autotest_common.sh@157 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:06.349 12:03:07 -- common/autotest_common.sh@159 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:06.349 12:03:07 -- common/autotest_common.sh@161 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:06.349 12:03:07 -- common/autotest_common.sh@163 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:06.349 12:03:07 -- common/autotest_common.sh@166 -- # : 00:07:06.349 12:03:07 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:06.349 12:03:07 -- common/autotest_common.sh@168 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:06.349 12:03:07 -- common/autotest_common.sh@170 -- # : 0 00:07:06.349 12:03:07 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:06.349 12:03:07 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:06.349 12:03:07 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:06.349 12:03:07 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:06.349 12:03:07 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:06.349 12:03:07 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:06.349 12:03:07 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:06.349 12:03:07 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:06.349 12:03:07 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:06.349 12:03:07 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:06.349 12:03:07 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:06.349 12:03:07 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:06.349 12:03:07 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:06.349 12:03:07 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:06.349 12:03:07 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:06.349 12:03:07 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:06.349 12:03:07 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:06.349 12:03:07 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:06.349 12:03:07 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:06.349 12:03:07 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:06.349 12:03:07 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:06.349 12:03:07 -- common/autotest_common.sh@199 -- # cat 00:07:06.349 12:03:07 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:07:06.349 12:03:07 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:06.349 12:03:07 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:06.349 12:03:07 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:06.349 12:03:07 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:06.349 12:03:07 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:07:06.349 12:03:07 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:07:06.349 12:03:07 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:06.349 12:03:07 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:06.350 12:03:07 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:06.350 12:03:07 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:06.350 12:03:07 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:06.350 12:03:07 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:06.350 12:03:07 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:06.350 12:03:07 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:06.350 12:03:07 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:06.350 12:03:07 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:06.350 12:03:07 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:06.350 12:03:07 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:06.350 12:03:07 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:07:06.350 12:03:07 -- common/autotest_common.sh@252 -- # export valgrind= 00:07:06.350 12:03:07 -- common/autotest_common.sh@252 -- # valgrind= 00:07:06.350 12:03:07 -- common/autotest_common.sh@258 -- # uname -s 00:07:06.350 12:03:07 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:07:06.350 12:03:07 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:07:06.350 12:03:07 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:07:06.350 12:03:07 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:07:06.350 12:03:07 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:06.350 12:03:07 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:06.350 
12:03:07 -- common/autotest_common.sh@268 -- # MAKE=make 00:07:06.350 12:03:07 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j144 00:07:06.350 12:03:07 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:07:06.350 12:03:07 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:07:06.350 12:03:07 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:07:06.350 12:03:07 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:07:06.350 12:03:07 -- common/autotest_common.sh@289 -- # for i in "$@" 00:07:06.350 12:03:07 -- common/autotest_common.sh@290 -- # case "$i" in 00:07:06.350 12:03:07 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:07:06.350 12:03:07 -- common/autotest_common.sh@307 -- # [[ -z 3225043 ]] 00:07:06.350 12:03:07 -- common/autotest_common.sh@307 -- # kill -0 3225043 00:07:06.350 12:03:07 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:06.350 12:03:07 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:07:06.350 12:03:07 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:07:06.350 12:03:07 -- common/autotest_common.sh@320 -- # local mount target_dir 00:07:06.350 12:03:07 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:07:06.350 12:03:07 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:07:06.350 12:03:07 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:07:06.350 12:03:07 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:07:06.350 12:03:07 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.hS1m7g 00:07:06.350 12:03:07 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:06.350 12:03:07 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:07:06.350 12:03:07 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:06.350 12:03:07 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.hS1m7g/tests/target /tmp/spdk.hS1m7g 00:07:06.350 12:03:07 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:07:06.350 12:03:07 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:06.350 12:03:07 -- common/autotest_common.sh@316 -- # df -T 00:07:06.350 12:03:07 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:07:06.350 12:03:07 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:07:06.350 12:03:07 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:07:06.350 12:03:07 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:07:06.350 12:03:07 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # avails["$mount"]=123284959232 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # sizes["$mount"]=129371000832 00:07:06.350 12:03:07 -- common/autotest_common.sh@352 -- # uses["$mount"]=6086041600 00:07:06.350 12:03:07 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # avails["$mount"]=64682885120 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685498368 00:07:06.350 12:03:07 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:07:06.350 12:03:07 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # avails["$mount"]=25864454144 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # sizes["$mount"]=25874202624 00:07:06.350 12:03:07 -- common/autotest_common.sh@352 -- # uses["$mount"]=9748480 00:07:06.350 12:03:07 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # mounts["$mount"]=efivarfs 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # fss["$mount"]=efivarfs 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # avails["$mount"]=189440 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # sizes["$mount"]=507904 00:07:06.350 12:03:07 -- common/autotest_common.sh@352 -- # uses["$mount"]=314368 00:07:06.350 12:03:07 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # avails["$mount"]=64684953600 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685502464 00:07:06.350 12:03:07 -- common/autotest_common.sh@352 -- # uses["$mount"]=548864 00:07:06.350 12:03:07 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:06.350 12:03:07 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # avails["$mount"]=12937093120 00:07:06.350 12:03:07 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12937097216 00:07:06.350 12:03:07 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:07:06.350 12:03:07 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:06.350 12:03:07 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:07:06.350 * Looking for test storage... 
00:07:06.350 12:03:07 -- common/autotest_common.sh@357 -- # local target_space new_size 00:07:06.350 12:03:07 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:07:06.350 12:03:07 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.350 12:03:07 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:06.350 12:03:07 -- common/autotest_common.sh@361 -- # mount=/ 00:07:06.350 12:03:07 -- common/autotest_common.sh@363 -- # target_space=123284959232 00:07:06.350 12:03:07 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:07:06.350 12:03:07 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:07:06.350 12:03:07 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:07:06.350 12:03:07 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:07:06.350 12:03:07 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:07:06.350 12:03:07 -- common/autotest_common.sh@370 -- # new_size=8300634112 00:07:06.350 12:03:07 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:06.350 12:03:07 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.350 12:03:07 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.350 12:03:07 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.350 12:03:07 -- common/autotest_common.sh@378 -- # return 0 00:07:06.350 12:03:07 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:06.350 12:03:07 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:07:06.350 12:03:07 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:06.350 12:03:07 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:06.350 12:03:07 -- common/autotest_common.sh@1673 -- # true 00:07:06.350 12:03:07 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:06.350 12:03:07 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:06.350 12:03:07 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:06.350 12:03:07 -- common/autotest_common.sh@27 -- # exec 00:07:06.350 12:03:07 -- common/autotest_common.sh@29 -- # exec 00:07:06.350 12:03:07 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:06.350 12:03:07 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:06.350 12:03:07 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:06.350 12:03:07 -- common/autotest_common.sh@18 -- # set -x 00:07:06.350 12:03:07 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.350 12:03:07 -- nvmf/common.sh@7 -- # uname -s 00:07:06.350 12:03:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.350 12:03:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.351 12:03:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.351 12:03:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.351 12:03:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.351 12:03:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.351 12:03:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.351 12:03:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.351 12:03:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.351 12:03:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.351 12:03:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:06.351 12:03:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:06.351 12:03:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.351 12:03:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.351 12:03:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.351 12:03:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.351 12:03:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.351 12:03:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.351 12:03:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.351 12:03:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.351 12:03:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.351 12:03:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.351 12:03:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.351 12:03:07 -- paths/export.sh@5 -- # export PATH 00:07:06.351 12:03:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.351 12:03:07 -- nvmf/common.sh@47 -- # : 0 00:07:06.351 12:03:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:06.351 12:03:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:06.351 12:03:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.351 12:03:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.351 12:03:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.351 12:03:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:06.351 12:03:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:06.351 12:03:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:06.351 12:03:07 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:06.351 12:03:07 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:06.351 12:03:07 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:06.351 12:03:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:06.351 12:03:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.351 12:03:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:06.351 12:03:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:06.351 12:03:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:06.351 12:03:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.351 12:03:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.351 12:03:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.351 12:03:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:06.351 12:03:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:06.351 12:03:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:06.351 12:03:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.494 12:03:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:14.494 12:03:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:14.494 12:03:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:14.494 12:03:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:14.494 12:03:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:14.494 12:03:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:14.494 12:03:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:14.494 12:03:14 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:14.494 12:03:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:14.494 12:03:14 -- nvmf/common.sh@296 -- # e810=() 00:07:14.494 12:03:14 -- nvmf/common.sh@296 -- # local -ga e810 00:07:14.494 12:03:14 -- nvmf/common.sh@297 -- # x722=() 00:07:14.494 12:03:14 -- nvmf/common.sh@297 -- # local -ga x722 00:07:14.494 12:03:14 -- nvmf/common.sh@298 -- # mlx=() 00:07:14.494 12:03:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:14.494 12:03:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.494 12:03:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.494 12:03:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.494 12:03:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.494 12:03:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.495 12:03:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.495 12:03:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.495 12:03:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.495 12:03:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.495 12:03:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.495 12:03:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.495 12:03:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:14.495 12:03:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:14.495 12:03:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:14.495 12:03:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.495 12:03:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:14.495 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:14.495 12:03:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.495 12:03:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:14.495 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:14.495 12:03:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:14.495 12:03:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.495 12:03:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.495 12:03:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:14.495 12:03:14 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.495 12:03:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:14.495 Found net devices under 0000:31:00.0: cvl_0_0 00:07:14.495 12:03:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.495 12:03:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.495 12:03:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.495 12:03:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:14.495 12:03:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.495 12:03:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:14.495 Found net devices under 0000:31:00.1: cvl_0_1 00:07:14.495 12:03:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.495 12:03:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:14.495 12:03:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:14.495 12:03:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:14.495 12:03:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.495 12:03:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.495 12:03:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:14.495 12:03:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:14.495 12:03:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:14.495 12:03:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:14.495 12:03:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:14.495 12:03:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:14.495 12:03:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.495 12:03:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:14.495 12:03:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:14.495 12:03:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:14.495 12:03:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:14.495 12:03:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:14.495 12:03:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:14.495 12:03:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:14.495 12:03:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:14.495 12:03:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:14.495 12:03:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:14.495 12:03:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:14.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:14.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:07:14.495 00:07:14.495 --- 10.0.0.2 ping statistics --- 00:07:14.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.495 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:07:14.495 12:03:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:14.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:14.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:07:14.495 00:07:14.495 --- 10.0.0.1 ping statistics --- 00:07:14.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.495 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:07:14.495 12:03:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.495 12:03:14 -- nvmf/common.sh@411 -- # return 0 00:07:14.495 12:03:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:14.495 12:03:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.495 12:03:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:14.495 12:03:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.495 12:03:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:14.495 12:03:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:14.495 12:03:14 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:14.495 12:03:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:14.495 12:03:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.495 12:03:14 -- common/autotest_common.sh@10 -- # set +x 00:07:14.495 ************************************ 00:07:14.495 START TEST nvmf_filesystem_no_in_capsule 00:07:14.495 ************************************ 00:07:14.495 12:03:14 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:07:14.495 12:03:14 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:14.495 12:03:14 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:14.495 12:03:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:14.495 12:03:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:14.495 12:03:14 -- common/autotest_common.sh@10 -- # set +x 00:07:14.495 12:03:14 -- nvmf/common.sh@470 -- # nvmfpid=3229494 00:07:14.495 12:03:14 -- nvmf/common.sh@471 -- # waitforlisten 3229494 00:07:14.495 12:03:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:14.495 12:03:14 -- common/autotest_common.sh@817 -- # '[' -z 3229494 ']' 00:07:14.495 12:03:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.495 12:03:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:14.495 12:03:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.495 12:03:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:14.495 12:03:14 -- common/autotest_common.sh@10 -- # set +x 00:07:14.495 [2024-04-26 12:03:14.926557] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:07:14.495 [2024-04-26 12:03:14.926611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.495 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.495 [2024-04-26 12:03:14.998421] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.495 [2024-04-26 12:03:15.073692] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:14.495 [2024-04-26 12:03:15.073734] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.495 [2024-04-26 12:03:15.073743] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.495 [2024-04-26 12:03:15.073750] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.495 [2024-04-26 12:03:15.073757] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:14.495 [2024-04-26 12:03:15.073898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.495 [2024-04-26 12:03:15.074160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.495 [2024-04-26 12:03:15.074316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.495 [2024-04-26 12:03:15.074318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.495 12:03:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:14.495 12:03:15 -- common/autotest_common.sh@850 -- # return 0 00:07:14.495 12:03:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:14.495 12:03:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:14.495 12:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:14.756 12:03:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.756 12:03:15 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:14.756 12:03:15 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:14.756 12:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.756 12:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:14.756 [2024-04-26 12:03:15.756459] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.756 12:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.756 12:03:15 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:14.756 12:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.756 12:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:14.756 Malloc1 00:07:14.756 12:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.756 12:03:15 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:14.756 12:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.756 12:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:14.756 12:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.756 12:03:15 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:14.756 12:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.756 12:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:14.756 12:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.756 12:03:15 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.756 12:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.756 12:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:14.756 [2024-04-26 12:03:15.885228] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.756 12:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.756 12:03:15 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:07:14.756 12:03:15 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:14.756 12:03:15 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:14.756 12:03:15 -- common/autotest_common.sh@1366 -- # local bs 00:07:14.756 12:03:15 -- common/autotest_common.sh@1367 -- # local nb 00:07:14.756 12:03:15 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:14.756 12:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.756 12:03:15 -- common/autotest_common.sh@10 -- # set +x 00:07:14.756 12:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.757 12:03:15 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:14.757 { 00:07:14.757 "name": "Malloc1", 00:07:14.757 "aliases": [ 00:07:14.757 "aafc1f24-ac55-45d3-94f1-7da271a78793" 00:07:14.757 ], 00:07:14.757 "product_name": "Malloc disk", 00:07:14.757 "block_size": 512, 00:07:14.757 "num_blocks": 1048576, 00:07:14.757 "uuid": "aafc1f24-ac55-45d3-94f1-7da271a78793", 00:07:14.757 "assigned_rate_limits": { 00:07:14.757 "rw_ios_per_sec": 0, 00:07:14.757 "rw_mbytes_per_sec": 0, 00:07:14.757 "r_mbytes_per_sec": 0, 00:07:14.757 "w_mbytes_per_sec": 0 00:07:14.757 }, 00:07:14.757 "claimed": true, 00:07:14.757 "claim_type": "exclusive_write", 00:07:14.757 "zoned": false, 00:07:14.757 "supported_io_types": { 00:07:14.757 "read": true, 00:07:14.757 "write": true, 00:07:14.757 "unmap": true, 00:07:14.757 "write_zeroes": true, 00:07:14.757 "flush": true, 00:07:14.757 "reset": true, 00:07:14.757 "compare": false, 00:07:14.757 "compare_and_write": false, 00:07:14.757 "abort": true, 00:07:14.757 "nvme_admin": false, 00:07:14.757 "nvme_io": false 00:07:14.757 }, 00:07:14.757 "memory_domains": [ 00:07:14.757 { 00:07:14.757 "dma_device_id": "system", 00:07:14.757 "dma_device_type": 1 00:07:14.757 }, 00:07:14.757 { 00:07:14.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.757 "dma_device_type": 2 00:07:14.757 } 00:07:14.757 ], 00:07:14.757 "driver_specific": {} 00:07:14.757 } 00:07:14.757 ]' 00:07:14.757 12:03:15 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:14.757 12:03:15 -- common/autotest_common.sh@1369 -- # bs=512 00:07:14.757 12:03:15 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:15.017 12:03:16 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:15.017 12:03:16 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:15.017 12:03:16 -- common/autotest_common.sh@1374 -- # echo 512 00:07:15.017 12:03:16 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:15.017 12:03:16 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:16.401 12:03:17 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:16.401 12:03:17 -- common/autotest_common.sh@1184 -- # local i=0 00:07:16.401 12:03:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:16.401 12:03:17 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:16.401 12:03:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:18.944 12:03:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:18.944 12:03:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:18.944 12:03:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:18.944 12:03:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
00:07:18.944 12:03:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:18.944 12:03:19 -- common/autotest_common.sh@1194 -- # return 0 00:07:18.944 12:03:19 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:18.944 12:03:19 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:18.944 12:03:19 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:18.944 12:03:19 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:18.944 12:03:19 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:18.944 12:03:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:18.944 12:03:19 -- setup/common.sh@80 -- # echo 536870912 00:07:18.945 12:03:19 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:18.945 12:03:19 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:18.945 12:03:19 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:18.945 12:03:19 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:18.945 12:03:19 -- target/filesystem.sh@69 -- # partprobe 00:07:18.945 12:03:20 -- target/filesystem.sh@70 -- # sleep 1 00:07:19.886 12:03:21 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:19.886 12:03:21 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:19.886 12:03:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:19.886 12:03:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.886 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:07:20.147 ************************************ 00:07:20.147 START TEST filesystem_ext4 00:07:20.147 ************************************ 00:07:20.147 12:03:21 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:20.147 12:03:21 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:20.147 12:03:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.147 12:03:21 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:20.147 12:03:21 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:20.147 12:03:21 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:20.147 12:03:21 -- common/autotest_common.sh@914 -- # local i=0 00:07:20.147 12:03:21 -- common/autotest_common.sh@915 -- # local force 00:07:20.147 12:03:21 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:20.147 12:03:21 -- common/autotest_common.sh@918 -- # force=-F 00:07:20.147 12:03:21 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:20.147 mke2fs 1.46.5 (30-Dec-2021) 00:07:20.147 Discarding device blocks: 0/522240 done 00:07:20.147 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:20.147 Filesystem UUID: 771b8397-e6c0-46b7-b32f-c0f0f3bf72f8 00:07:20.147 Superblock backups stored on blocks: 00:07:20.148 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:20.148 00:07:20.148 Allocating group tables: 0/64 done 00:07:20.148 Writing inode tables: 0/64 done 00:07:23.450 Creating journal (8192 blocks): done 00:07:23.450 Writing superblocks and filesystem accounting information: 0/64 done 00:07:23.450 00:07:23.450 12:03:23 -- common/autotest_common.sh@931 -- # return 0 00:07:23.450 12:03:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:23.450 12:03:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:23.450 12:03:24 -- target/filesystem.sh@25 -- # sync 00:07:23.450 12:03:24 -- target/filesystem.sh@26 -- # rm 
/mnt/device/aaa 00:07:23.450 12:03:24 -- target/filesystem.sh@27 -- # sync 00:07:23.450 12:03:24 -- target/filesystem.sh@29 -- # i=0 00:07:23.450 12:03:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:23.711 12:03:24 -- target/filesystem.sh@37 -- # kill -0 3229494 00:07:23.711 12:03:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:23.711 12:03:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:23.711 12:03:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:23.711 12:03:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:23.711 00:07:23.711 real 0m3.534s 00:07:23.711 user 0m0.032s 00:07:23.711 sys 0m0.067s 00:07:23.711 12:03:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.711 12:03:24 -- common/autotest_common.sh@10 -- # set +x 00:07:23.711 ************************************ 00:07:23.711 END TEST filesystem_ext4 00:07:23.711 ************************************ 00:07:23.711 12:03:24 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:23.711 12:03:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:23.711 12:03:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.711 12:03:24 -- common/autotest_common.sh@10 -- # set +x 00:07:23.711 ************************************ 00:07:23.711 START TEST filesystem_btrfs 00:07:23.711 ************************************ 00:07:23.711 12:03:24 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:23.711 12:03:24 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:23.711 12:03:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:23.711 12:03:24 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:23.711 12:03:24 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:23.711 12:03:24 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:23.711 12:03:24 -- common/autotest_common.sh@914 -- # local i=0 00:07:23.711 12:03:24 -- common/autotest_common.sh@915 -- # local force 00:07:23.711 12:03:24 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:23.711 12:03:24 -- common/autotest_common.sh@920 -- # force=-f 00:07:23.711 12:03:24 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:23.972 btrfs-progs v6.6.2 00:07:23.972 See https://btrfs.readthedocs.io for more information. 00:07:23.972 00:07:23.972 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:23.972 NOTE: several default settings have changed in version 5.15, please make sure 00:07:23.972 this does not affect your deployments: 00:07:23.972 - DUP for metadata (-m dup) 00:07:23.972 - enabled no-holes (-O no-holes) 00:07:23.972 - enabled free-space-tree (-R free-space-tree) 00:07:23.972 00:07:23.972 Label: (null) 00:07:23.972 UUID: b4cf2bac-1e2f-4934-99d2-acfa31689da6 00:07:23.972 Node size: 16384 00:07:23.972 Sector size: 4096 00:07:23.972 Filesystem size: 510.00MiB 00:07:23.972 Block group profiles: 00:07:23.972 Data: single 8.00MiB 00:07:23.972 Metadata: DUP 32.00MiB 00:07:23.972 System: DUP 8.00MiB 00:07:23.972 SSD detected: yes 00:07:23.972 Zoned device: no 00:07:23.972 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:23.972 Runtime features: free-space-tree 00:07:23.972 Checksum: crc32c 00:07:23.972 Number of devices: 1 00:07:23.972 Devices: 00:07:23.972 ID SIZE PATH 00:07:23.972 1 510.00MiB /dev/nvme0n1p1 00:07:23.972 00:07:23.972 12:03:25 -- common/autotest_common.sh@931 -- # return 0 00:07:23.972 12:03:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:24.544 12:03:25 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:24.544 12:03:25 -- target/filesystem.sh@25 -- # sync 00:07:24.544 12:03:25 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:24.544 12:03:25 -- target/filesystem.sh@27 -- # sync 00:07:24.544 12:03:25 -- target/filesystem.sh@29 -- # i=0 00:07:24.544 12:03:25 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:24.544 12:03:25 -- target/filesystem.sh@37 -- # kill -0 3229494 00:07:24.544 12:03:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:24.544 12:03:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:24.544 12:03:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:24.544 12:03:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:24.544 00:07:24.544 real 0m0.832s 00:07:24.544 user 0m0.022s 00:07:24.544 sys 0m0.138s 00:07:24.544 12:03:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:24.544 12:03:25 -- common/autotest_common.sh@10 -- # set +x 00:07:24.544 ************************************ 00:07:24.544 END TEST filesystem_btrfs 00:07:24.544 ************************************ 00:07:24.806 12:03:25 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:24.806 12:03:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:24.806 12:03:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.806 12:03:25 -- common/autotest_common.sh@10 -- # set +x 00:07:24.806 ************************************ 00:07:24.806 START TEST filesystem_xfs 00:07:24.806 ************************************ 00:07:24.806 12:03:25 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:24.806 12:03:25 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:24.806 12:03:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:24.806 12:03:25 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:24.806 12:03:25 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:24.806 12:03:25 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:24.806 12:03:25 -- common/autotest_common.sh@914 -- # local i=0 00:07:24.806 12:03:25 -- common/autotest_common.sh@915 -- # local force 00:07:24.806 12:03:25 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:24.806 12:03:25 -- common/autotest_common.sh@920 -- # force=-f 00:07:24.806 12:03:25 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:24.806 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:24.806 = sectsz=512 attr=2, projid32bit=1 00:07:24.806 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:24.806 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:24.806 data = bsize=4096 blocks=130560, imaxpct=25 00:07:24.806 = sunit=0 swidth=0 blks 00:07:24.806 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:24.806 log =internal log bsize=4096 blocks=16384, version=2 00:07:24.806 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:24.806 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:25.748 Discarding blocks...Done. 00:07:25.748 12:03:26 -- common/autotest_common.sh@931 -- # return 0 00:07:25.748 12:03:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:27.659 12:03:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:27.659 12:03:28 -- target/filesystem.sh@25 -- # sync 00:07:27.659 12:03:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:27.659 12:03:28 -- target/filesystem.sh@27 -- # sync 00:07:27.659 12:03:28 -- target/filesystem.sh@29 -- # i=0 00:07:27.659 12:03:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:27.659 12:03:28 -- target/filesystem.sh@37 -- # kill -0 3229494 00:07:27.659 12:03:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:27.659 12:03:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:27.659 12:03:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:27.659 12:03:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:27.659 00:07:27.659 real 0m2.828s 00:07:27.659 user 0m0.021s 00:07:27.659 sys 0m0.080s 00:07:27.659 12:03:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:27.659 12:03:28 -- common/autotest_common.sh@10 -- # set +x 00:07:27.659 ************************************ 00:07:27.659 END TEST filesystem_xfs 00:07:27.659 ************************************ 00:07:27.659 12:03:28 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:27.659 12:03:28 -- target/filesystem.sh@93 -- # sync 00:07:27.659 12:03:28 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:27.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.920 12:03:28 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:27.920 12:03:28 -- common/autotest_common.sh@1205 -- # local i=0 00:07:27.920 12:03:28 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:27.920 12:03:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.920 12:03:28 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:27.920 12:03:28 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.920 12:03:29 -- common/autotest_common.sh@1217 -- # return 0 00:07:27.920 12:03:29 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.920 12:03:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.920 12:03:29 -- common/autotest_common.sh@10 -- # set +x 00:07:27.920 12:03:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.920 12:03:29 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:27.920 12:03:29 -- target/filesystem.sh@101 -- # killprocess 3229494 00:07:27.920 12:03:29 -- common/autotest_common.sh@936 -- # '[' -z 3229494 ']' 00:07:27.920 12:03:29 -- common/autotest_common.sh@940 -- # kill -0 3229494 00:07:27.920 12:03:29 -- 
common/autotest_common.sh@941 -- # uname 00:07:27.920 12:03:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:27.920 12:03:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3229494 00:07:27.920 12:03:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:27.920 12:03:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:27.920 12:03:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3229494' 00:07:27.920 killing process with pid 3229494 00:07:27.920 12:03:29 -- common/autotest_common.sh@955 -- # kill 3229494 00:07:27.920 12:03:29 -- common/autotest_common.sh@960 -- # wait 3229494 00:07:28.180 12:03:29 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:28.180 00:07:28.180 real 0m14.443s 00:07:28.180 user 0m57.073s 00:07:28.180 sys 0m1.412s 00:07:28.180 12:03:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:28.180 12:03:29 -- common/autotest_common.sh@10 -- # set +x 00:07:28.180 ************************************ 00:07:28.180 END TEST nvmf_filesystem_no_in_capsule 00:07:28.180 ************************************ 00:07:28.180 12:03:29 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:28.180 12:03:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:28.180 12:03:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.180 12:03:29 -- common/autotest_common.sh@10 -- # set +x 00:07:28.441 ************************************ 00:07:28.441 START TEST nvmf_filesystem_in_capsule 00:07:28.441 ************************************ 00:07:28.441 12:03:29 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:28.441 12:03:29 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:28.441 12:03:29 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:28.441 12:03:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:28.441 12:03:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:28.441 12:03:29 -- common/autotest_common.sh@10 -- # set +x 00:07:28.441 12:03:29 -- nvmf/common.sh@470 -- # nvmfpid=3232476 00:07:28.441 12:03:29 -- nvmf/common.sh@471 -- # waitforlisten 3232476 00:07:28.441 12:03:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:28.441 12:03:29 -- common/autotest_common.sh@817 -- # '[' -z 3232476 ']' 00:07:28.441 12:03:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.441 12:03:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:28.441 12:03:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.441 12:03:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:28.441 12:03:29 -- common/autotest_common.sh@10 -- # set +x 00:07:28.441 [2024-04-26 12:03:29.549077] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:07:28.441 [2024-04-26 12:03:29.549135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.441 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.441 [2024-04-26 12:03:29.621391] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.702 [2024-04-26 12:03:29.695387] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.702 [2024-04-26 12:03:29.695429] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.702 [2024-04-26 12:03:29.695437] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.702 [2024-04-26 12:03:29.695443] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.702 [2024-04-26 12:03:29.695449] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:28.702 [2024-04-26 12:03:29.695588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.702 [2024-04-26 12:03:29.695708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.702 [2024-04-26 12:03:29.695879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.702 [2024-04-26 12:03:29.695879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.274 12:03:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:29.274 12:03:30 -- common/autotest_common.sh@850 -- # return 0 00:07:29.274 12:03:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:29.274 12:03:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:29.274 12:03:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.274 12:03:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.274 12:03:30 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:29.274 12:03:30 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:29.274 12:03:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.274 12:03:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.274 [2024-04-26 12:03:30.377429] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.274 12:03:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.274 12:03:30 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:29.274 12:03:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.274 12:03:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.274 Malloc1 00:07:29.274 12:03:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.274 12:03:30 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:29.274 12:03:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.274 12:03:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.274 12:03:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.274 12:03:30 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:29.274 12:03:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.274 12:03:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.536 12:03:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.536 12:03:30 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.536 12:03:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.536 12:03:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.536 [2024-04-26 12:03:30.503829] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.536 12:03:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.536 12:03:30 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:29.536 12:03:30 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:29.536 12:03:30 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:29.536 12:03:30 -- common/autotest_common.sh@1366 -- # local bs 00:07:29.536 12:03:30 -- common/autotest_common.sh@1367 -- # local nb 00:07:29.536 12:03:30 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:29.536 12:03:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.536 12:03:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.536 12:03:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.536 12:03:30 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:29.536 { 00:07:29.536 "name": "Malloc1", 00:07:29.536 "aliases": [ 00:07:29.536 "728bc43a-71be-449a-abb4-be71952b3083" 00:07:29.536 ], 00:07:29.536 "product_name": "Malloc disk", 00:07:29.536 "block_size": 512, 00:07:29.536 "num_blocks": 1048576, 00:07:29.536 "uuid": "728bc43a-71be-449a-abb4-be71952b3083", 00:07:29.536 "assigned_rate_limits": { 00:07:29.536 "rw_ios_per_sec": 0, 00:07:29.536 "rw_mbytes_per_sec": 0, 00:07:29.536 "r_mbytes_per_sec": 0, 00:07:29.536 "w_mbytes_per_sec": 0 00:07:29.536 }, 00:07:29.536 "claimed": true, 00:07:29.536 "claim_type": "exclusive_write", 00:07:29.536 "zoned": false, 00:07:29.536 "supported_io_types": { 00:07:29.536 "read": true, 00:07:29.536 "write": true, 00:07:29.536 "unmap": true, 00:07:29.536 "write_zeroes": true, 00:07:29.536 "flush": true, 00:07:29.536 "reset": true, 00:07:29.536 "compare": false, 00:07:29.536 "compare_and_write": false, 00:07:29.536 "abort": true, 00:07:29.536 "nvme_admin": false, 00:07:29.536 "nvme_io": false 00:07:29.536 }, 00:07:29.536 "memory_domains": [ 00:07:29.536 { 00:07:29.536 "dma_device_id": "system", 00:07:29.536 "dma_device_type": 1 00:07:29.536 }, 00:07:29.536 { 00:07:29.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.536 "dma_device_type": 2 00:07:29.536 } 00:07:29.536 ], 00:07:29.536 "driver_specific": {} 00:07:29.536 } 00:07:29.536 ]' 00:07:29.536 12:03:30 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:29.536 12:03:30 -- common/autotest_common.sh@1369 -- # bs=512 00:07:29.536 12:03:30 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:29.536 12:03:30 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:29.536 12:03:30 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:29.536 12:03:30 -- common/autotest_common.sh@1374 -- # echo 512 00:07:29.536 12:03:30 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:29.536 12:03:30 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:31.450 12:03:32 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:31.450 12:03:32 -- common/autotest_common.sh@1184 -- # local i=0 00:07:31.450 12:03:32 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:31.450 12:03:32 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:31.450 12:03:32 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:33.361 12:03:34 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:33.361 12:03:34 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:33.361 12:03:34 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:33.361 12:03:34 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:33.361 12:03:34 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:33.361 12:03:34 -- common/autotest_common.sh@1194 -- # return 0 00:07:33.361 12:03:34 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:33.361 12:03:34 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:33.361 12:03:34 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:33.361 12:03:34 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:33.361 12:03:34 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:33.361 12:03:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:33.361 12:03:34 -- setup/common.sh@80 -- # echo 536870912 00:07:33.362 12:03:34 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:33.362 12:03:34 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:33.362 12:03:34 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:33.362 12:03:34 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:33.362 12:03:34 -- target/filesystem.sh@69 -- # partprobe 00:07:33.621 12:03:34 -- target/filesystem.sh@70 -- # sleep 1 00:07:35.005 12:03:35 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:35.005 12:03:35 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:35.005 12:03:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:35.005 12:03:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.005 12:03:35 -- common/autotest_common.sh@10 -- # set +x 00:07:35.005 ************************************ 00:07:35.005 START TEST filesystem_in_capsule_ext4 00:07:35.005 ************************************ 00:07:35.005 12:03:35 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:35.005 12:03:35 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:35.005 12:03:35 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:35.005 12:03:35 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:35.005 12:03:35 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:35.005 12:03:35 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:35.005 12:03:35 -- common/autotest_common.sh@914 -- # local i=0 00:07:35.005 12:03:35 -- common/autotest_common.sh@915 -- # local force 00:07:35.005 12:03:35 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:35.005 12:03:35 -- common/autotest_common.sh@918 -- # force=-F 00:07:35.005 12:03:35 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:35.005 mke2fs 1.46.5 (30-Dec-2021) 00:07:35.005 Discarding device blocks: 0/522240 done 00:07:35.005 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:35.005 Filesystem UUID: 58852c9f-642e-4d92-8b77-ee7f30263616 00:07:35.005 Superblock backups stored on blocks: 00:07:35.005 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:35.005 00:07:35.005 
Allocating group tables: 0/64 done 00:07:35.005 Writing inode tables: 0/64 done 00:07:38.305 Creating journal (8192 blocks): done 00:07:38.306 Writing superblocks and filesystem accounting information: 0/64 done 00:07:38.306 00:07:38.306 12:03:38 -- common/autotest_common.sh@931 -- # return 0 00:07:38.306 12:03:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.306 12:03:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.306 12:03:39 -- target/filesystem.sh@25 -- # sync 00:07:38.306 12:03:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.306 12:03:39 -- target/filesystem.sh@27 -- # sync 00:07:38.306 12:03:39 -- target/filesystem.sh@29 -- # i=0 00:07:38.306 12:03:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.306 12:03:39 -- target/filesystem.sh@37 -- # kill -0 3232476 00:07:38.306 12:03:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.306 12:03:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.306 12:03:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.306 12:03:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.306 00:07:38.306 real 0m3.408s 00:07:38.306 user 0m0.031s 00:07:38.306 sys 0m0.069s 00:07:38.306 12:03:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.306 12:03:39 -- common/autotest_common.sh@10 -- # set +x 00:07:38.306 ************************************ 00:07:38.306 END TEST filesystem_in_capsule_ext4 00:07:38.306 ************************************ 00:07:38.306 12:03:39 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:38.306 12:03:39 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:38.306 12:03:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.306 12:03:39 -- common/autotest_common.sh@10 -- # set +x 00:07:38.565 ************************************ 00:07:38.565 START TEST filesystem_in_capsule_btrfs 00:07:38.565 ************************************ 00:07:38.565 12:03:39 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:38.565 12:03:39 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:38.565 12:03:39 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.565 12:03:39 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:38.565 12:03:39 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:38.565 12:03:39 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:38.565 12:03:39 -- common/autotest_common.sh@914 -- # local i=0 00:07:38.565 12:03:39 -- common/autotest_common.sh@915 -- # local force 00:07:38.565 12:03:39 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:38.565 12:03:39 -- common/autotest_common.sh@920 -- # force=-f 00:07:38.565 12:03:39 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:38.825 btrfs-progs v6.6.2 00:07:38.825 See https://btrfs.readthedocs.io for more information. 00:07:38.825 00:07:38.825 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:38.825 NOTE: several default settings have changed in version 5.15, please make sure 00:07:38.825 this does not affect your deployments: 00:07:38.825 - DUP for metadata (-m dup) 00:07:38.825 - enabled no-holes (-O no-holes) 00:07:38.825 - enabled free-space-tree (-R free-space-tree) 00:07:38.825 00:07:38.825 Label: (null) 00:07:38.825 UUID: e62d16fa-ccab-492d-9549-b45ff2a47a78 00:07:38.825 Node size: 16384 00:07:38.825 Sector size: 4096 00:07:38.825 Filesystem size: 510.00MiB 00:07:38.825 Block group profiles: 00:07:38.825 Data: single 8.00MiB 00:07:38.825 Metadata: DUP 32.00MiB 00:07:38.825 System: DUP 8.00MiB 00:07:38.825 SSD detected: yes 00:07:38.825 Zoned device: no 00:07:38.825 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:38.825 Runtime features: free-space-tree 00:07:38.825 Checksum: crc32c 00:07:38.825 Number of devices: 1 00:07:38.825 Devices: 00:07:38.825 ID SIZE PATH 00:07:38.825 1 510.00MiB /dev/nvme0n1p1 00:07:38.825 00:07:38.825 12:03:39 -- common/autotest_common.sh@931 -- # return 0 00:07:38.825 12:03:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:39.395 12:03:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:39.395 12:03:40 -- target/filesystem.sh@25 -- # sync 00:07:39.395 12:03:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:39.395 12:03:40 -- target/filesystem.sh@27 -- # sync 00:07:39.395 12:03:40 -- target/filesystem.sh@29 -- # i=0 00:07:39.395 12:03:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:39.395 12:03:40 -- target/filesystem.sh@37 -- # kill -0 3232476 00:07:39.395 12:03:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:39.395 12:03:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:39.395 12:03:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:39.395 12:03:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:39.395 00:07:39.395 real 0m0.902s 00:07:39.395 user 0m0.028s 00:07:39.395 sys 0m0.133s 00:07:39.395 12:03:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:39.395 12:03:40 -- common/autotest_common.sh@10 -- # set +x 00:07:39.395 ************************************ 00:07:39.395 END TEST filesystem_in_capsule_btrfs 00:07:39.396 ************************************ 00:07:39.396 12:03:40 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:39.396 12:03:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:39.396 12:03:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.396 12:03:40 -- common/autotest_common.sh@10 -- # set +x 00:07:39.655 ************************************ 00:07:39.655 START TEST filesystem_in_capsule_xfs 00:07:39.655 ************************************ 00:07:39.655 12:03:40 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:39.655 12:03:40 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:39.655 12:03:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.655 12:03:40 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:39.655 12:03:40 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:39.655 12:03:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:39.656 12:03:40 -- common/autotest_common.sh@914 -- # local i=0 00:07:39.656 12:03:40 -- common/autotest_common.sh@915 -- # local force 00:07:39.656 12:03:40 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:39.656 12:03:40 -- common/autotest_common.sh@920 -- # force=-f 
00:07:39.656 12:03:40 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:39.656 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:39.656 = sectsz=512 attr=2, projid32bit=1 00:07:39.656 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:39.656 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:39.656 data = bsize=4096 blocks=130560, imaxpct=25 00:07:39.656 = sunit=0 swidth=0 blks 00:07:39.656 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:39.656 log =internal log bsize=4096 blocks=16384, version=2 00:07:39.656 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:39.656 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:40.225 Discarding blocks...Done. 00:07:40.225 12:03:41 -- common/autotest_common.sh@931 -- # return 0 00:07:40.225 12:03:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.209 12:03:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.209 12:03:43 -- target/filesystem.sh@25 -- # sync 00:07:42.209 12:03:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.209 12:03:43 -- target/filesystem.sh@27 -- # sync 00:07:42.209 12:03:43 -- target/filesystem.sh@29 -- # i=0 00:07:42.209 12:03:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.209 12:03:43 -- target/filesystem.sh@37 -- # kill -0 3232476 00:07:42.209 12:03:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.209 12:03:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.209 12:03:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.209 12:03:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.209 00:07:42.209 real 0m2.617s 00:07:42.209 user 0m0.027s 00:07:42.209 sys 0m0.075s 00:07:42.209 12:03:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:42.209 12:03:43 -- common/autotest_common.sh@10 -- # set +x 00:07:42.209 ************************************ 00:07:42.209 END TEST filesystem_in_capsule_xfs 00:07:42.209 ************************************ 00:07:42.209 12:03:43 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:42.469 12:03:43 -- target/filesystem.sh@93 -- # sync 00:07:42.469 12:03:43 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:42.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.469 12:03:43 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:42.469 12:03:43 -- common/autotest_common.sh@1205 -- # local i=0 00:07:42.729 12:03:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:42.729 12:03:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.729 12:03:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:42.729 12:03:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.729 12:03:43 -- common/autotest_common.sh@1217 -- # return 0 00:07:42.729 12:03:43 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.729 12:03:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.729 12:03:43 -- common/autotest_common.sh@10 -- # set +x 00:07:42.729 12:03:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.729 12:03:43 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:42.729 12:03:43 -- target/filesystem.sh@101 -- # killprocess 3232476 00:07:42.729 12:03:43 -- common/autotest_common.sh@936 -- # '[' -z 3232476 ']' 00:07:42.729 12:03:43 -- common/autotest_common.sh@940 -- # kill -0 3232476 
00:07:42.729 12:03:43 -- common/autotest_common.sh@941 -- # uname 00:07:42.729 12:03:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:42.729 12:03:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3232476 00:07:42.729 12:03:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:42.729 12:03:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:42.729 12:03:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3232476' 00:07:42.729 killing process with pid 3232476 00:07:42.729 12:03:43 -- common/autotest_common.sh@955 -- # kill 3232476 00:07:42.730 12:03:43 -- common/autotest_common.sh@960 -- # wait 3232476 00:07:42.990 12:03:44 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:42.990 00:07:42.990 real 0m14.531s 00:07:42.990 user 0m57.424s 00:07:42.990 sys 0m1.425s 00:07:42.990 12:03:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:42.990 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:07:42.990 ************************************ 00:07:42.990 END TEST nvmf_filesystem_in_capsule 00:07:42.990 ************************************ 00:07:42.990 12:03:44 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:42.990 12:03:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:42.990 12:03:44 -- nvmf/common.sh@117 -- # sync 00:07:42.990 12:03:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.990 12:03:44 -- nvmf/common.sh@120 -- # set +e 00:07:42.990 12:03:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.990 12:03:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.990 rmmod nvme_tcp 00:07:42.990 rmmod nvme_fabrics 00:07:42.990 rmmod nvme_keyring 00:07:42.990 12:03:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.990 12:03:44 -- nvmf/common.sh@124 -- # set -e 00:07:42.990 12:03:44 -- nvmf/common.sh@125 -- # return 0 00:07:42.990 12:03:44 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:42.990 12:03:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:42.990 12:03:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:42.990 12:03:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:42.990 12:03:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.990 12:03:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:42.990 12:03:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.990 12:03:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.990 12:03:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.536 12:03:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:45.536 00:07:45.536 real 0m39.067s 00:07:45.536 user 1m56.809s 00:07:45.536 sys 0m8.504s 00:07:45.536 12:03:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:45.536 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:45.536 ************************************ 00:07:45.536 END TEST nvmf_filesystem 00:07:45.536 ************************************ 00:07:45.536 12:03:46 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:45.536 12:03:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:45.536 12:03:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.536 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:45.536 ************************************ 00:07:45.536 START TEST nvmf_discovery 00:07:45.536 ************************************ 00:07:45.536 
12:03:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:45.536 * Looking for test storage... 00:07:45.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.536 12:03:46 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.536 12:03:46 -- nvmf/common.sh@7 -- # uname -s 00:07:45.536 12:03:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.536 12:03:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.536 12:03:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.536 12:03:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.536 12:03:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.536 12:03:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.536 12:03:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.536 12:03:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.536 12:03:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.536 12:03:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.536 12:03:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:45.536 12:03:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:45.536 12:03:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.536 12:03:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.536 12:03:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.536 12:03:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.536 12:03:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.536 12:03:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.536 12:03:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.536 12:03:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.536 12:03:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.536 12:03:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.536 12:03:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.536 12:03:46 -- paths/export.sh@5 -- # export PATH 00:07:45.536 12:03:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.536 12:03:46 -- nvmf/common.sh@47 -- # : 0 00:07:45.536 12:03:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.536 12:03:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.536 12:03:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.536 12:03:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.536 12:03:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.536 12:03:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.536 12:03:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.536 12:03:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.536 12:03:46 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:45.536 12:03:46 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:45.536 12:03:46 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:45.536 12:03:46 -- target/discovery.sh@15 -- # hash nvme 00:07:45.536 12:03:46 -- target/discovery.sh@20 -- # nvmftestinit 00:07:45.536 12:03:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:45.536 12:03:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.536 12:03:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:45.536 12:03:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:45.536 12:03:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:45.536 12:03:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.536 12:03:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.536 12:03:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.536 12:03:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:45.536 12:03:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:45.536 12:03:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.536 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:52.124 12:03:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:52.124 12:03:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:52.124 12:03:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:52.124 12:03:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:52.124 12:03:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:52.124 12:03:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:52.124 12:03:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:52.124 12:03:53 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:52.124 12:03:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:52.124 12:03:53 -- nvmf/common.sh@296 -- # e810=() 00:07:52.124 12:03:53 -- nvmf/common.sh@296 -- # local -ga e810 00:07:52.124 12:03:53 -- nvmf/common.sh@297 -- # x722=() 00:07:52.124 12:03:53 -- nvmf/common.sh@297 -- # local -ga x722 00:07:52.124 12:03:53 -- nvmf/common.sh@298 -- # mlx=() 00:07:52.124 12:03:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:52.124 12:03:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.124 12:03:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.124 12:03:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.124 12:03:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.124 12:03:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.124 12:03:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.124 12:03:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.124 12:03:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.124 12:03:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.124 12:03:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.124 12:03:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.124 12:03:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:52.124 12:03:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:52.124 12:03:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:52.124 12:03:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:52.124 12:03:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:52.125 12:03:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:52.125 12:03:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.125 12:03:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:52.125 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:52.125 12:03:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.125 12:03:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.125 12:03:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.125 12:03:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.125 12:03:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.125 12:03:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.125 12:03:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:52.125 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:52.125 12:03:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.125 12:03:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.125 12:03:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.125 12:03:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.125 12:03:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.125 12:03:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:52.125 12:03:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:52.125 12:03:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:52.125 12:03:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.125 12:03:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.125 12:03:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:52.125 12:03:53 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.125 12:03:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:52.125 Found net devices under 0000:31:00.0: cvl_0_0 00:07:52.125 12:03:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.125 12:03:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.125 12:03:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.125 12:03:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:52.125 12:03:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.125 12:03:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:52.125 Found net devices under 0000:31:00.1: cvl_0_1 00:07:52.125 12:03:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.125 12:03:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:52.125 12:03:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:52.125 12:03:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:52.125 12:03:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:52.125 12:03:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:52.125 12:03:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.125 12:03:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.125 12:03:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.125 12:03:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:52.125 12:03:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.125 12:03:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.125 12:03:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:52.125 12:03:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.125 12:03:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.125 12:03:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:52.125 12:03:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:52.386 12:03:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.386 12:03:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.386 12:03:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.386 12:03:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.386 12:03:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:52.386 12:03:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.646 12:03:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.646 12:03:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.646 12:03:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:52.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:07:52.646 00:07:52.647 --- 10.0.0.2 ping statistics --- 00:07:52.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.647 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:07:52.647 12:03:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:52.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:07:52.647 00:07:52.647 --- 10.0.0.1 ping statistics --- 00:07:52.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.647 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:07:52.647 12:03:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.647 12:03:53 -- nvmf/common.sh@411 -- # return 0 00:07:52.647 12:03:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:52.647 12:03:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.647 12:03:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:52.647 12:03:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:52.647 12:03:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.647 12:03:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:52.647 12:03:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:52.647 12:03:53 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:52.647 12:03:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:52.647 12:03:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:52.647 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:52.647 12:03:53 -- nvmf/common.sh@470 -- # nvmfpid=3239838 00:07:52.647 12:03:53 -- nvmf/common.sh@471 -- # waitforlisten 3239838 00:07:52.647 12:03:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.647 12:03:53 -- common/autotest_common.sh@817 -- # '[' -z 3239838 ']' 00:07:52.647 12:03:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.647 12:03:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:52.647 12:03:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.647 12:03:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:52.647 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:52.647 [2024-04-26 12:03:53.743848] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:07:52.647 [2024-04-26 12:03:53.743904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.647 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.647 [2024-04-26 12:03:53.815721] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.906 [2024-04-26 12:03:53.887600] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.906 [2024-04-26 12:03:53.887646] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.906 [2024-04-26 12:03:53.887655] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.906 [2024-04-26 12:03:53.887662] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.906 [2024-04-26 12:03:53.887667] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
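At this point nvmfappstart has launched nvmf_tgt inside the target namespace (pid 3239838 above) and waitforlisten blocks until the application answers on its RPC socket rather than sleeping for a fixed interval. A simplified, hypothetical equivalent of that wait (the real helper in autotest_common.sh does more bookkeeping):

    # Poll the RPC socket until the freshly started target responds.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1          # give up if the target died
            "$SPDK/scripts/rpc.py" -s "$sock" spdk_get_version \
                >/dev/null 2>&1 && return 0                 # RPC answered: target is ready
            sleep 0.1
        done
        return 1                                            # timed out
    }
    # usage: wait_for_rpc "$nvmfpid" /var/tmp/spdk.sock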
00:07:52.906 [2024-04-26 12:03:53.887820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.906 [2024-04-26 12:03:53.887939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.906 [2024-04-26 12:03:53.887986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.906 [2024-04-26 12:03:53.887987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.490 12:03:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:53.490 12:03:54 -- common/autotest_common.sh@850 -- # return 0 00:07:53.490 12:03:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:53.490 12:03:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:53.490 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.490 12:03:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.490 12:03:54 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.490 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.490 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.490 [2024-04-26 12:03:54.573455] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.490 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.490 12:03:54 -- target/discovery.sh@26 -- # seq 1 4 00:07:53.490 12:03:54 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.490 12:03:54 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:53.490 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.490 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.490 Null1 00:07:53.490 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.490 12:03:54 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:53.490 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.490 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.490 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.490 12:03:54 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:53.490 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.490 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.490 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.490 12:03:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.490 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.490 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.490 [2024-04-26 12:03:54.633777] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.490 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.490 12:03:54 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.490 12:03:54 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:53.490 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.490 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.490 Null2 00:07:53.490 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.490 12:03:54 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:53.490 12:03:54 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.490 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.490 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.490 12:03:54 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:53.490 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.490 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.490 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.490 12:03:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:53.490 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.490 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.490 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.490 12:03:54 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.490 12:03:54 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:53.490 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.490 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.490 Null3 00:07:53.490 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.490 12:03:54 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:53.490 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.490 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.751 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.751 12:03:54 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:53.751 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.751 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.751 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.751 12:03:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:53.751 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.751 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.751 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.751 12:03:54 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.751 12:03:54 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:53.751 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.751 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.751 Null4 00:07:53.751 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.751 12:03:54 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:53.751 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.751 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.751 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.751 12:03:54 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:53.751 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.751 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.751 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.751 12:03:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:53.751 
12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.751 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.751 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.751 12:03:54 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.751 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.751 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.751 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.751 12:03:54 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:53.751 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.751 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.751 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.751 12:03:54 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:07:53.751 00:07:53.751 Discovery Log Number of Records 6, Generation counter 6 00:07:53.751 =====Discovery Log Entry 0====== 00:07:53.751 trtype: tcp 00:07:53.751 adrfam: ipv4 00:07:53.751 subtype: current discovery subsystem 00:07:53.751 treq: not required 00:07:53.751 portid: 0 00:07:53.751 trsvcid: 4420 00:07:53.751 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:53.751 traddr: 10.0.0.2 00:07:53.751 eflags: explicit discovery connections, duplicate discovery information 00:07:53.751 sectype: none 00:07:53.751 =====Discovery Log Entry 1====== 00:07:53.751 trtype: tcp 00:07:53.751 adrfam: ipv4 00:07:53.751 subtype: nvme subsystem 00:07:53.751 treq: not required 00:07:53.751 portid: 0 00:07:53.751 trsvcid: 4420 00:07:53.751 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:53.751 traddr: 10.0.0.2 00:07:53.751 eflags: none 00:07:53.751 sectype: none 00:07:53.751 =====Discovery Log Entry 2====== 00:07:53.751 trtype: tcp 00:07:53.751 adrfam: ipv4 00:07:53.751 subtype: nvme subsystem 00:07:53.751 treq: not required 00:07:53.751 portid: 0 00:07:53.751 trsvcid: 4420 00:07:53.751 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:53.751 traddr: 10.0.0.2 00:07:53.751 eflags: none 00:07:53.751 sectype: none 00:07:53.751 =====Discovery Log Entry 3====== 00:07:53.751 trtype: tcp 00:07:53.751 adrfam: ipv4 00:07:53.751 subtype: nvme subsystem 00:07:53.751 treq: not required 00:07:53.751 portid: 0 00:07:53.751 trsvcid: 4420 00:07:53.751 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:53.751 traddr: 10.0.0.2 00:07:53.751 eflags: none 00:07:53.751 sectype: none 00:07:53.751 =====Discovery Log Entry 4====== 00:07:53.751 trtype: tcp 00:07:53.751 adrfam: ipv4 00:07:53.751 subtype: nvme subsystem 00:07:53.751 treq: not required 00:07:53.751 portid: 0 00:07:53.751 trsvcid: 4420 00:07:53.751 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:53.751 traddr: 10.0.0.2 00:07:53.751 eflags: none 00:07:53.751 sectype: none 00:07:53.751 =====Discovery Log Entry 5====== 00:07:53.751 trtype: tcp 00:07:53.751 adrfam: ipv4 00:07:53.751 subtype: discovery subsystem referral 00:07:53.751 treq: not required 00:07:53.751 portid: 0 00:07:53.751 trsvcid: 4430 00:07:53.751 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:53.751 traddr: 10.0.0.2 00:07:53.751 eflags: none 00:07:53.751 sectype: none 00:07:53.751 12:03:54 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:53.751 Perform nvmf subsystem discovery via RPC 00:07:53.751 12:03:54 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:53.751 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.751 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:53.751 [2024-04-26 12:03:54.934621] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:53.751 [ 00:07:53.751 { 00:07:53.751 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:53.751 "subtype": "Discovery", 00:07:53.751 "listen_addresses": [ 00:07:53.751 { 00:07:53.751 "transport": "TCP", 00:07:53.751 "trtype": "TCP", 00:07:53.751 "adrfam": "IPv4", 00:07:53.751 "traddr": "10.0.0.2", 00:07:53.751 "trsvcid": "4420" 00:07:53.751 } 00:07:53.751 ], 00:07:53.751 "allow_any_host": true, 00:07:53.751 "hosts": [] 00:07:53.751 }, 00:07:53.751 { 00:07:53.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.751 "subtype": "NVMe", 00:07:53.751 "listen_addresses": [ 00:07:53.751 { 00:07:53.751 "transport": "TCP", 00:07:53.751 "trtype": "TCP", 00:07:53.751 "adrfam": "IPv4", 00:07:53.751 "traddr": "10.0.0.2", 00:07:53.751 "trsvcid": "4420" 00:07:53.751 } 00:07:53.751 ], 00:07:53.751 "allow_any_host": true, 00:07:53.751 "hosts": [], 00:07:53.751 "serial_number": "SPDK00000000000001", 00:07:53.751 "model_number": "SPDK bdev Controller", 00:07:53.751 "max_namespaces": 32, 00:07:53.751 "min_cntlid": 1, 00:07:53.751 "max_cntlid": 65519, 00:07:53.751 "namespaces": [ 00:07:53.751 { 00:07:53.751 "nsid": 1, 00:07:53.751 "bdev_name": "Null1", 00:07:53.751 "name": "Null1", 00:07:53.751 "nguid": "885061CF6123461A9F3C4A2FC80FC1E3", 00:07:53.751 "uuid": "885061cf-6123-461a-9f3c-4a2fc80fc1e3" 00:07:53.751 } 00:07:53.751 ] 00:07:53.751 }, 00:07:53.751 { 00:07:53.751 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:53.751 "subtype": "NVMe", 00:07:53.751 "listen_addresses": [ 00:07:53.751 { 00:07:53.751 "transport": "TCP", 00:07:53.751 "trtype": "TCP", 00:07:53.751 "adrfam": "IPv4", 00:07:53.751 "traddr": "10.0.0.2", 00:07:53.751 "trsvcid": "4420" 00:07:53.752 } 00:07:53.752 ], 00:07:53.752 "allow_any_host": true, 00:07:53.752 "hosts": [], 00:07:53.752 "serial_number": "SPDK00000000000002", 00:07:53.752 "model_number": "SPDK bdev Controller", 00:07:53.752 "max_namespaces": 32, 00:07:53.752 "min_cntlid": 1, 00:07:53.752 "max_cntlid": 65519, 00:07:53.752 "namespaces": [ 00:07:53.752 { 00:07:53.752 "nsid": 1, 00:07:53.752 "bdev_name": "Null2", 00:07:53.752 "name": "Null2", 00:07:53.752 "nguid": "9ECA065F833F47C1BCFE3A197972356B", 00:07:53.752 "uuid": "9eca065f-833f-47c1-bcfe-3a197972356b" 00:07:53.752 } 00:07:53.752 ] 00:07:53.752 }, 00:07:53.752 { 00:07:53.752 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:53.752 "subtype": "NVMe", 00:07:53.752 "listen_addresses": [ 00:07:53.752 { 00:07:53.752 "transport": "TCP", 00:07:53.752 "trtype": "TCP", 00:07:53.752 "adrfam": "IPv4", 00:07:53.752 "traddr": "10.0.0.2", 00:07:53.752 "trsvcid": "4420" 00:07:53.752 } 00:07:53.752 ], 00:07:53.752 "allow_any_host": true, 00:07:53.752 "hosts": [], 00:07:53.752 "serial_number": "SPDK00000000000003", 00:07:53.752 "model_number": "SPDK bdev Controller", 00:07:53.752 "max_namespaces": 32, 00:07:53.752 "min_cntlid": 1, 00:07:53.752 "max_cntlid": 65519, 00:07:53.752 "namespaces": [ 00:07:53.752 { 00:07:53.752 "nsid": 1, 00:07:53.752 "bdev_name": "Null3", 00:07:53.752 "name": "Null3", 00:07:53.752 "nguid": "2E748E0C5236439691C84AD8A3CA3460", 00:07:53.752 "uuid": "2e748e0c-5236-4396-91c8-4ad8a3ca3460" 00:07:53.752 } 00:07:53.752 ] 
00:07:53.752 }, 00:07:53.752 { 00:07:53.752 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:53.752 "subtype": "NVMe", 00:07:53.752 "listen_addresses": [ 00:07:53.752 { 00:07:53.752 "transport": "TCP", 00:07:53.752 "trtype": "TCP", 00:07:53.752 "adrfam": "IPv4", 00:07:53.752 "traddr": "10.0.0.2", 00:07:53.752 "trsvcid": "4420" 00:07:53.752 } 00:07:53.752 ], 00:07:53.752 "allow_any_host": true, 00:07:53.752 "hosts": [], 00:07:53.752 "serial_number": "SPDK00000000000004", 00:07:53.752 "model_number": "SPDK bdev Controller", 00:07:53.752 "max_namespaces": 32, 00:07:53.752 "min_cntlid": 1, 00:07:53.752 "max_cntlid": 65519, 00:07:53.752 "namespaces": [ 00:07:53.752 { 00:07:53.752 "nsid": 1, 00:07:53.752 "bdev_name": "Null4", 00:07:53.752 "name": "Null4", 00:07:53.752 "nguid": "C17004290CE2409DA08329AF0C593325", 00:07:53.752 "uuid": "c1700429-0ce2-409d-a083-29af0c593325" 00:07:53.752 } 00:07:53.752 ] 00:07:53.752 } 00:07:53.752 ] 00:07:53.752 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.752 12:03:54 -- target/discovery.sh@42 -- # seq 1 4 00:07:53.752 12:03:54 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.752 12:03:54 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:53.752 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.752 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:54.012 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.012 12:03:54 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:54.012 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.012 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:54.012 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.012 12:03:54 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:54.012 12:03:54 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:54.012 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.012 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:54.012 12:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.012 12:03:54 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:54.012 12:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.012 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:54.012 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.012 12:03:55 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:54.012 12:03:55 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:54.012 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.012 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.012 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.012 12:03:55 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:54.012 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.012 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.012 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.012 12:03:55 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:54.012 12:03:55 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:54.012 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.012 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.012 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:07:54.012 12:03:55 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:54.012 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.012 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.012 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.012 12:03:55 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:54.012 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.012 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.012 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.012 12:03:55 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:54.012 12:03:55 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:54.012 12:03:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.012 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.012 12:03:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.012 12:03:55 -- target/discovery.sh@49 -- # check_bdevs= 00:07:54.012 12:03:55 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:54.012 12:03:55 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:54.012 12:03:55 -- target/discovery.sh@57 -- # nvmftestfini 00:07:54.012 12:03:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:54.012 12:03:55 -- nvmf/common.sh@117 -- # sync 00:07:54.013 12:03:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:54.013 12:03:55 -- nvmf/common.sh@120 -- # set +e 00:07:54.013 12:03:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:54.013 12:03:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:54.013 rmmod nvme_tcp 00:07:54.013 rmmod nvme_fabrics 00:07:54.013 rmmod nvme_keyring 00:07:54.013 12:03:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:54.013 12:03:55 -- nvmf/common.sh@124 -- # set -e 00:07:54.013 12:03:55 -- nvmf/common.sh@125 -- # return 0 00:07:54.013 12:03:55 -- nvmf/common.sh@478 -- # '[' -n 3239838 ']' 00:07:54.013 12:03:55 -- nvmf/common.sh@479 -- # killprocess 3239838 00:07:54.013 12:03:55 -- common/autotest_common.sh@936 -- # '[' -z 3239838 ']' 00:07:54.013 12:03:55 -- common/autotest_common.sh@940 -- # kill -0 3239838 00:07:54.013 12:03:55 -- common/autotest_common.sh@941 -- # uname 00:07:54.013 12:03:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:54.013 12:03:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3239838 00:07:54.013 12:03:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:54.013 12:03:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:54.013 12:03:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3239838' 00:07:54.013 killing process with pid 3239838 00:07:54.013 12:03:55 -- common/autotest_common.sh@955 -- # kill 3239838 00:07:54.013 [2024-04-26 12:03:55.225086] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:54.013 12:03:55 -- common/autotest_common.sh@960 -- # wait 3239838 00:07:54.273 12:03:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:54.273 12:03:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:54.273 12:03:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:54.273 12:03:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:54.273 12:03:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:54.273 12:03:55 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.273 12:03:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.273 12:03:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.847 12:03:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:56.847 00:07:56.847 real 0m11.048s 00:07:56.847 user 0m8.164s 00:07:56.847 sys 0m5.570s 00:07:56.847 12:03:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.847 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:07:56.847 ************************************ 00:07:56.847 END TEST nvmf_discovery 00:07:56.847 ************************************ 00:07:56.847 12:03:57 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:56.847 12:03:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:56.847 12:03:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.847 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:07:56.847 ************************************ 00:07:56.847 START TEST nvmf_referrals 00:07:56.847 ************************************ 00:07:56.847 12:03:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:56.847 * Looking for test storage... 00:07:56.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.847 12:03:57 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.847 12:03:57 -- nvmf/common.sh@7 -- # uname -s 00:07:56.847 12:03:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.847 12:03:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.847 12:03:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.847 12:03:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.847 12:03:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.847 12:03:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.847 12:03:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.847 12:03:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.847 12:03:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.847 12:03:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.847 12:03:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:56.847 12:03:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:56.847 12:03:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.847 12:03:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.847 12:03:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.847 12:03:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.847 12:03:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.847 12:03:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.847 12:03:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.847 12:03:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.847 12:03:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.847 12:03:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.847 12:03:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.847 12:03:57 -- paths/export.sh@5 -- # export PATH 00:07:56.847 12:03:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.847 12:03:57 -- nvmf/common.sh@47 -- # : 0 00:07:56.847 12:03:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.847 12:03:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:56.847 12:03:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.847 12:03:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.847 12:03:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.847 12:03:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.847 12:03:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.847 12:03:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.847 12:03:57 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:56.847 12:03:57 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:56.847 12:03:57 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:56.847 12:03:57 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:56.847 12:03:57 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:56.847 12:03:57 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:56.847 12:03:57 -- target/referrals.sh@37 -- # nvmftestinit 00:07:56.847 12:03:57 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:07:56.847 12:03:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.847 12:03:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:56.847 12:03:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:56.847 12:03:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:56.847 12:03:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.847 12:03:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.847 12:03:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.847 12:03:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:56.847 12:03:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:56.847 12:03:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:56.847 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:08:03.433 12:04:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:03.433 12:04:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:03.433 12:04:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:03.433 12:04:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:03.433 12:04:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:03.433 12:04:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:03.433 12:04:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:03.433 12:04:04 -- nvmf/common.sh@295 -- # net_devs=() 00:08:03.433 12:04:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:03.433 12:04:04 -- nvmf/common.sh@296 -- # e810=() 00:08:03.433 12:04:04 -- nvmf/common.sh@296 -- # local -ga e810 00:08:03.433 12:04:04 -- nvmf/common.sh@297 -- # x722=() 00:08:03.433 12:04:04 -- nvmf/common.sh@297 -- # local -ga x722 00:08:03.433 12:04:04 -- nvmf/common.sh@298 -- # mlx=() 00:08:03.433 12:04:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:03.433 12:04:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.433 12:04:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.433 12:04:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.433 12:04:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.433 12:04:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.433 12:04:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.433 12:04:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.433 12:04:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.433 12:04:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.433 12:04:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.433 12:04:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.433 12:04:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:03.433 12:04:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:03.433 12:04:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:03.433 12:04:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.433 12:04:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:03.433 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:03.433 12:04:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.433 12:04:04 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.433 12:04:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:03.433 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:03.433 12:04:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:03.433 12:04:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.433 12:04:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.433 12:04:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:03.433 12:04:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.433 12:04:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:03.433 Found net devices under 0000:31:00.0: cvl_0_0 00:08:03.433 12:04:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.433 12:04:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.433 12:04:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.433 12:04:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:03.433 12:04:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.433 12:04:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:03.433 Found net devices under 0000:31:00.1: cvl_0_1 00:08:03.433 12:04:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.433 12:04:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:03.433 12:04:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:03.433 12:04:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:03.433 12:04:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:03.433 12:04:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.433 12:04:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.433 12:04:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.433 12:04:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:03.433 12:04:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.433 12:04:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.433 12:04:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:03.433 12:04:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.433 12:04:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.433 12:04:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:03.433 12:04:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:03.433 12:04:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.433 12:04:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
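The sequence here is nvmf_tcp_init wiring the two ice ports for the referrals run: cvl_0_0 has just been moved into the cvl_0_0_ns_spdk namespace to act as the target side, cvl_0_1 stays in the root namespace as the initiator, and the next commands assign 10.0.0.2/24 and 10.0.0.1/24 respectively, add an iptables ACCEPT rule for TCP port 4420, and ping in both directions to prove connectivity before the target starts. Once the setup completes, it can be inspected with a few ad-hoc commands (not part of the test, shown only as a sketch):

    ip netns list                                              # -> cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip -4 addr show dev cvl_0_0  # target side, 10.0.0.2/24
    ip -4 addr show dev cvl_0_1                                # initiator side, 10.0.0.1/24
    ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'      # NVMe/TCP listener, once configured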
00:08:03.693 12:04:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.693 12:04:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.693 12:04:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:03.693 12:04:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.693 12:04:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.693 12:04:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.693 12:04:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:03.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:08:03.693 00:08:03.693 --- 10.0.0.2 ping statistics --- 00:08:03.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.693 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:08:03.693 12:04:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:08:03.693 00:08:03.693 --- 10.0.0.1 ping statistics --- 00:08:03.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.693 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:08:03.693 12:04:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.693 12:04:04 -- nvmf/common.sh@411 -- # return 0 00:08:03.693 12:04:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:03.693 12:04:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.693 12:04:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:03.693 12:04:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:03.693 12:04:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.694 12:04:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:03.694 12:04:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:03.954 12:04:04 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:03.954 12:04:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:03.954 12:04:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:03.954 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:08:03.954 12:04:04 -- nvmf/common.sh@470 -- # nvmfpid=3244578 00:08:03.954 12:04:04 -- nvmf/common.sh@471 -- # waitforlisten 3244578 00:08:03.954 12:04:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.954 12:04:04 -- common/autotest_common.sh@817 -- # '[' -z 3244578 ']' 00:08:03.954 12:04:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.954 12:04:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:03.954 12:04:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.954 12:04:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:03.954 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:08:03.954 [2024-04-26 12:04:04.999298] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:08:03.954 [2024-04-26 12:04:04.999365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.954 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.954 [2024-04-26 12:04:05.073133] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.954 [2024-04-26 12:04:05.145813] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.954 [2024-04-26 12:04:05.145864] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.954 [2024-04-26 12:04:05.145874] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.954 [2024-04-26 12:04:05.145883] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.954 [2024-04-26 12:04:05.145889] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.954 [2024-04-26 12:04:05.146124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.954 [2024-04-26 12:04:05.146300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.954 [2024-04-26 12:04:05.146458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.954 [2024-04-26 12:04:05.146459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.895 12:04:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:04.895 12:04:05 -- common/autotest_common.sh@850 -- # return 0 00:08:04.895 12:04:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:04.895 12:04:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:04.895 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:08:04.895 12:04:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.895 12:04:05 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:04.895 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.895 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:08:04.895 [2024-04-26 12:04:05.832392] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.895 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.895 12:04:05 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:04.895 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.895 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:08:04.895 [2024-04-26 12:04:05.848572] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:04.895 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.895 12:04:05 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:04.895 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.895 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:08:04.895 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.895 12:04:05 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:04.895 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.895 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:08:04.895 12:04:05 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:08:04.895 12:04:05 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:04.895 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.895 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:08:04.895 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.895 12:04:05 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.895 12:04:05 -- target/referrals.sh@48 -- # jq length 00:08:04.895 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.895 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:08:04.895 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.895 12:04:05 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:04.895 12:04:05 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:04.895 12:04:05 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.895 12:04:05 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.895 12:04:05 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.895 12:04:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.895 12:04:05 -- target/referrals.sh@21 -- # sort 00:08:04.895 12:04:05 -- common/autotest_common.sh@10 -- # set +x 00:08:04.895 12:04:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.895 12:04:05 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:04.895 12:04:05 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:04.895 12:04:05 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:04.895 12:04:05 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.895 12:04:05 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.895 12:04:05 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.895 12:04:05 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.895 12:04:05 -- target/referrals.sh@26 -- # sort 00:08:05.155 12:04:06 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:05.155 12:04:06 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:05.155 12:04:06 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:05.155 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.155 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:05.155 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.155 12:04:06 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:05.155 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.155 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:05.155 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.155 12:04:06 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:05.155 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.155 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:05.155 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.155 12:04:06 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:08:05.155 12:04:06 -- target/referrals.sh@56 -- # jq length 00:08:05.155 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.155 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:05.155 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.155 12:04:06 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:05.155 12:04:06 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:05.155 12:04:06 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.155 12:04:06 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.155 12:04:06 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.155 12:04:06 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.155 12:04:06 -- target/referrals.sh@26 -- # sort 00:08:05.155 12:04:06 -- target/referrals.sh@26 -- # echo 00:08:05.155 12:04:06 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:05.155 12:04:06 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:05.155 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.155 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:05.155 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.155 12:04:06 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:05.155 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.155 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:05.155 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.155 12:04:06 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:05.155 12:04:06 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:05.155 12:04:06 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.155 12:04:06 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:05.155 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.155 12:04:06 -- target/referrals.sh@21 -- # sort 00:08:05.155 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:05.415 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.415 12:04:06 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:05.415 12:04:06 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:05.415 12:04:06 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:05.415 12:04:06 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.415 12:04:06 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.415 12:04:06 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.415 12:04:06 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.415 12:04:06 -- target/referrals.sh@26 -- # sort 00:08:05.415 12:04:06 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:05.415 12:04:06 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:05.415 12:04:06 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:08:05.415 12:04:06 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:05.415 12:04:06 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:05.415 12:04:06 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.415 12:04:06 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:05.674 12:04:06 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:05.674 12:04:06 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:05.674 12:04:06 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:05.674 12:04:06 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:05.674 12:04:06 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.674 12:04:06 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:05.674 12:04:06 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:05.674 12:04:06 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:05.674 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.674 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:05.674 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.674 12:04:06 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:05.674 12:04:06 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:05.674 12:04:06 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.674 12:04:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.674 12:04:06 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:05.674 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:05.674 12:04:06 -- target/referrals.sh@21 -- # sort 00:08:05.674 12:04:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.674 12:04:06 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:05.933 12:04:06 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:05.933 12:04:06 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:05.933 12:04:06 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.933 12:04:06 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.933 12:04:06 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.933 12:04:06 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.933 12:04:06 -- target/referrals.sh@26 -- # sort 00:08:05.933 12:04:06 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:05.933 12:04:07 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:05.933 12:04:07 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:05.933 12:04:07 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:05.933 12:04:07 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:08:05.933 12:04:07 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.933 12:04:07 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:05.933 12:04:07 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:05.933 12:04:07 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:05.933 12:04:07 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:05.933 12:04:07 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:05.933 12:04:07 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.933 12:04:07 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:06.193 12:04:07 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:06.193 12:04:07 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:06.193 12:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.193 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:08:06.193 12:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.193 12:04:07 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:06.193 12:04:07 -- target/referrals.sh@82 -- # jq length 00:08:06.193 12:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.193 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:08:06.193 12:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.193 12:04:07 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:06.193 12:04:07 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:06.193 12:04:07 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:06.193 12:04:07 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:06.193 12:04:07 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:06.193 12:04:07 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:06.193 12:04:07 -- target/referrals.sh@26 -- # sort 00:08:06.193 12:04:07 -- target/referrals.sh@26 -- # echo 00:08:06.193 12:04:07 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:06.193 12:04:07 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:06.193 12:04:07 -- target/referrals.sh@86 -- # nvmftestfini 00:08:06.193 12:04:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:06.193 12:04:07 -- nvmf/common.sh@117 -- # sync 00:08:06.193 12:04:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:06.193 12:04:07 -- nvmf/common.sh@120 -- # set +e 00:08:06.193 12:04:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:06.193 12:04:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:06.193 rmmod nvme_tcp 00:08:06.453 rmmod nvme_fabrics 00:08:06.453 rmmod nvme_keyring 00:08:06.453 12:04:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:06.453 12:04:07 -- nvmf/common.sh@124 -- # set -e 
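The referral checks traced above are, stripped of the harness's xtrace and assertions, a short RPC conversation with the discovery subsystem. The sketch below is a condensed reconstruction, not a literal excerpt: rpc_cmd in the trace is the autotest wrapper, so a standalone reproduction would presumably go through SPDK's scripts/rpc.py, and $NVME_HOSTNQN / $NVME_HOSTID stand for the values nvmf/common.sh derives from nvme gen-hostnqn for this run.

  # create the TCP transport and a discovery listener on 10.0.0.2:8009
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  # add three referrals, then confirm both the RPC view and what a host actually discovers
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length        # expects 3
  nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  # remove them again (the -n variants later in the trace attach a subsystem NQN to the referral)
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length        # back to 0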
00:08:06.453 12:04:07 -- nvmf/common.sh@125 -- # return 0 00:08:06.453 12:04:07 -- nvmf/common.sh@478 -- # '[' -n 3244578 ']' 00:08:06.453 12:04:07 -- nvmf/common.sh@479 -- # killprocess 3244578 00:08:06.453 12:04:07 -- common/autotest_common.sh@936 -- # '[' -z 3244578 ']' 00:08:06.453 12:04:07 -- common/autotest_common.sh@940 -- # kill -0 3244578 00:08:06.453 12:04:07 -- common/autotest_common.sh@941 -- # uname 00:08:06.453 12:04:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:06.453 12:04:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3244578 00:08:06.453 12:04:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:06.453 12:04:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:06.453 12:04:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3244578' 00:08:06.453 killing process with pid 3244578 00:08:06.453 12:04:07 -- common/autotest_common.sh@955 -- # kill 3244578 00:08:06.453 12:04:07 -- common/autotest_common.sh@960 -- # wait 3244578 00:08:06.453 12:04:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:06.453 12:04:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:06.453 12:04:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:06.454 12:04:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.454 12:04:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:06.454 12:04:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.454 12:04:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.454 12:04:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.996 12:04:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:08.997 00:08:08.997 real 0m12.100s 00:08:08.997 user 0m13.074s 00:08:08.997 sys 0m5.928s 00:08:08.997 12:04:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.997 12:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:08.997 ************************************ 00:08:08.997 END TEST nvmf_referrals 00:08:08.997 ************************************ 00:08:08.997 12:04:09 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:08.997 12:04:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:08.997 12:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.997 12:04:09 -- common/autotest_common.sh@10 -- # set +x 00:08:08.997 ************************************ 00:08:08.997 START TEST nvmf_connect_disconnect 00:08:08.997 ************************************ 00:08:08.997 12:04:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:08.997 * Looking for test storage... 
00:08:08.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.997 12:04:10 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.997 12:04:10 -- nvmf/common.sh@7 -- # uname -s 00:08:08.997 12:04:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.997 12:04:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.997 12:04:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.997 12:04:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.997 12:04:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.997 12:04:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.997 12:04:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.997 12:04:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.997 12:04:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.997 12:04:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.997 12:04:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:08.997 12:04:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:08.997 12:04:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.997 12:04:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.997 12:04:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.997 12:04:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.997 12:04:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.997 12:04:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.997 12:04:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.997 12:04:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.997 12:04:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.997 12:04:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.997 12:04:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.997 12:04:10 -- paths/export.sh@5 -- # export PATH 00:08:08.997 12:04:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.997 12:04:10 -- nvmf/common.sh@47 -- # : 0 00:08:08.997 12:04:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.997 12:04:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.997 12:04:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.997 12:04:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.997 12:04:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.997 12:04:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.997 12:04:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.997 12:04:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.997 12:04:10 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:08.997 12:04:10 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:08.997 12:04:10 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:08.997 12:04:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:08.997 12:04:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.997 12:04:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:08.997 12:04:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:08.997 12:04:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:08.997 12:04:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.997 12:04:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.997 12:04:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.997 12:04:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:08.997 12:04:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:08.997 12:04:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:08.997 12:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:17.195 12:04:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:17.195 12:04:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:17.195 12:04:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:17.195 12:04:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:17.195 12:04:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:17.195 12:04:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:17.195 12:04:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:17.195 12:04:16 -- nvmf/common.sh@295 -- # net_devs=() 00:08:17.195 12:04:16 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:17.195 12:04:16 -- nvmf/common.sh@296 -- # e810=() 00:08:17.195 12:04:16 -- nvmf/common.sh@296 -- # local -ga e810 00:08:17.195 12:04:16 -- nvmf/common.sh@297 -- # x722=() 00:08:17.195 12:04:16 -- nvmf/common.sh@297 -- # local -ga x722 00:08:17.195 12:04:16 -- nvmf/common.sh@298 -- # mlx=() 00:08:17.195 12:04:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:17.195 12:04:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.195 12:04:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.195 12:04:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.195 12:04:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.196 12:04:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.196 12:04:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.196 12:04:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.196 12:04:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.196 12:04:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.196 12:04:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.196 12:04:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.196 12:04:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:17.196 12:04:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:17.196 12:04:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:17.196 12:04:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.196 12:04:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:17.196 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:17.196 12:04:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.196 12:04:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:17.196 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:17.196 12:04:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:17.196 12:04:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.196 12:04:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.196 12:04:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:17.196 12:04:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.196 12:04:16 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:31:00.0: cvl_0_0' 00:08:17.196 Found net devices under 0000:31:00.0: cvl_0_0 00:08:17.196 12:04:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.196 12:04:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.196 12:04:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.196 12:04:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:17.196 12:04:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.196 12:04:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:17.196 Found net devices under 0000:31:00.1: cvl_0_1 00:08:17.196 12:04:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.196 12:04:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:17.196 12:04:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:17.196 12:04:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:17.196 12:04:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:17.196 12:04:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.196 12:04:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.196 12:04:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.196 12:04:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:17.196 12:04:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.196 12:04:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.196 12:04:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:17.196 12:04:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.196 12:04:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.196 12:04:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:17.196 12:04:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:17.196 12:04:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.196 12:04:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.196 12:04:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.196 12:04:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.196 12:04:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:17.196 12:04:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.196 12:04:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.196 12:04:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.196 12:04:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:17.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:08:17.196 00:08:17.196 --- 10.0.0.2 ping statistics --- 00:08:17.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.196 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:08:17.196 12:04:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:08:17.196 00:08:17.196 --- 10.0.0.1 ping statistics --- 00:08:17.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.196 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:08:17.196 12:04:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.196 12:04:17 -- nvmf/common.sh@411 -- # return 0 00:08:17.196 12:04:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:17.196 12:04:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.196 12:04:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:17.196 12:04:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:17.196 12:04:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.196 12:04:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:17.196 12:04:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:17.196 12:04:17 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:17.196 12:04:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:17.196 12:04:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:17.196 12:04:17 -- common/autotest_common.sh@10 -- # set +x 00:08:17.196 12:04:17 -- nvmf/common.sh@470 -- # nvmfpid=3249423 00:08:17.196 12:04:17 -- nvmf/common.sh@471 -- # waitforlisten 3249423 00:08:17.196 12:04:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.196 12:04:17 -- common/autotest_common.sh@817 -- # '[' -z 3249423 ']' 00:08:17.196 12:04:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.196 12:04:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:17.196 12:04:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.196 12:04:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:17.196 12:04:17 -- common/autotest_common.sh@10 -- # set +x 00:08:17.196 [2024-04-26 12:04:17.369841] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:08:17.196 [2024-04-26 12:04:17.369888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.196 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.196 [2024-04-26 12:04:17.437234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.196 [2024-04-26 12:04:17.501339] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.196 [2024-04-26 12:04:17.501379] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.196 [2024-04-26 12:04:17.501389] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.196 [2024-04-26 12:04:17.501397] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.196 [2024-04-26 12:04:17.501405] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
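Before each of these targets starts, nvmf/common.sh repeats the same nvmf_tcp_init bring-up that is traced above: one E810 port is isolated in a network namespace for the target, the other stays in the root namespace as the initiator side. Condensed, using the interface names and addresses from this run (the nvmf_tgt path is shortened from the full workspace path shown in the log):

  ip netns add cvl_0_0_ns_spdk                                   # target lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface
  ping -c 1 10.0.0.2                                             # reachability check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # then start the target inside the namespace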
00:08:17.196 [2024-04-26 12:04:17.502860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.196 [2024-04-26 12:04:17.503019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.196 [2024-04-26 12:04:17.503262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.196 [2024-04-26 12:04:17.503263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.196 12:04:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:17.197 12:04:17 -- common/autotest_common.sh@850 -- # return 0 00:08:17.197 12:04:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:17.197 12:04:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:17.197 12:04:17 -- common/autotest_common.sh@10 -- # set +x 00:08:17.197 12:04:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.197 12:04:17 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:17.197 12:04:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.197 12:04:17 -- common/autotest_common.sh@10 -- # set +x 00:08:17.197 [2024-04-26 12:04:17.649693] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.197 12:04:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.197 12:04:17 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:17.197 12:04:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.197 12:04:17 -- common/autotest_common.sh@10 -- # set +x 00:08:17.197 12:04:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.197 12:04:17 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:17.197 12:04:17 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:17.197 12:04:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.197 12:04:17 -- common/autotest_common.sh@10 -- # set +x 00:08:17.197 12:04:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.197 12:04:17 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:17.197 12:04:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.197 12:04:17 -- common/autotest_common.sh@10 -- # set +x 00:08:17.197 12:04:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.197 12:04:17 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.197 12:04:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.197 12:04:17 -- common/autotest_common.sh@10 -- # set +x 00:08:17.197 [2024-04-26 12:04:17.706637] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.197 12:04:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.197 12:04:17 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:17.197 12:04:17 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:17.197 12:04:17 -- target/connect_disconnect.sh@34 -- # set +x 00:08:20.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.908 12:04:35 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:34.908 12:04:35 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:34.908 12:04:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:34.908 12:04:35 -- nvmf/common.sh@117 -- # sync 00:08:34.908 12:04:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:34.908 12:04:35 -- nvmf/common.sh@120 -- # set +e 00:08:34.908 12:04:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:34.908 12:04:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:34.908 rmmod nvme_tcp 00:08:34.908 rmmod nvme_fabrics 00:08:34.908 rmmod nvme_keyring 00:08:34.908 12:04:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:34.908 12:04:35 -- nvmf/common.sh@124 -- # set -e 00:08:34.908 12:04:35 -- nvmf/common.sh@125 -- # return 0 00:08:34.908 12:04:35 -- nvmf/common.sh@478 -- # '[' -n 3249423 ']' 00:08:34.908 12:04:35 -- nvmf/common.sh@479 -- # killprocess 3249423 00:08:34.908 12:04:35 -- common/autotest_common.sh@936 -- # '[' -z 3249423 ']' 00:08:34.908 12:04:35 -- common/autotest_common.sh@940 -- # kill -0 3249423 00:08:34.908 12:04:35 -- common/autotest_common.sh@941 -- # uname 00:08:34.908 12:04:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:34.908 12:04:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3249423 00:08:34.908 12:04:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:34.908 12:04:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:34.908 12:04:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3249423' 00:08:34.908 killing process with pid 3249423 00:08:34.908 12:04:35 -- common/autotest_common.sh@955 -- # kill 3249423 00:08:34.908 12:04:35 -- common/autotest_common.sh@960 -- # wait 3249423 00:08:35.169 12:04:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:35.169 12:04:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:35.169 12:04:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:35.169 12:04:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.169 12:04:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.169 12:04:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.169 12:04:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.169 12:04:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.083 12:04:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:37.083 00:08:37.083 real 0m28.278s 00:08:37.083 user 1m16.251s 00:08:37.083 sys 0m6.500s 00:08:37.083 12:04:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:37.083 12:04:38 -- common/autotest_common.sh@10 -- # set +x 00:08:37.083 ************************************ 00:08:37.083 END TEST nvmf_connect_disconnect 00:08:37.083 ************************************ 00:08:37.083 12:04:38 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:37.083 12:04:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:37.083 12:04:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.083 12:04:38 -- common/autotest_common.sh@10 -- # set +x 00:08:37.344 ************************************ 00:08:37.344 START TEST nvmf_multitarget 00:08:37.344 ************************************ 00:08:37.344 12:04:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:08:37.344 * Looking for test storage... 00:08:37.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.344 12:04:38 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.344 12:04:38 -- nvmf/common.sh@7 -- # uname -s 00:08:37.344 12:04:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.344 12:04:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.344 12:04:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.344 12:04:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.344 12:04:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.344 12:04:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.344 12:04:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.344 12:04:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.344 12:04:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.344 12:04:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.344 12:04:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:37.344 12:04:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:37.345 12:04:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.345 12:04:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.345 12:04:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.345 12:04:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.345 12:04:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.345 12:04:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.345 12:04:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.345 12:04:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.345 12:04:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.345 12:04:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.345 12:04:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.345 12:04:38 -- paths/export.sh@5 -- # export PATH 00:08:37.345 12:04:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.345 12:04:38 -- nvmf/common.sh@47 -- # : 0 00:08:37.345 12:04:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:37.345 12:04:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:37.345 12:04:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.345 12:04:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.345 12:04:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.345 12:04:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:37.345 12:04:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:37.345 12:04:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:37.345 12:04:38 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:37.345 12:04:38 -- target/multitarget.sh@15 -- # nvmftestinit 00:08:37.345 12:04:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:37.345 12:04:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.345 12:04:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:37.345 12:04:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:37.345 12:04:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:37.345 12:04:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.345 12:04:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.345 12:04:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.345 12:04:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:37.345 12:04:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:37.345 12:04:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:37.345 12:04:38 -- common/autotest_common.sh@10 -- # set +x 00:08:45.510 12:04:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:45.510 12:04:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:45.510 12:04:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:45.510 12:04:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:45.510 12:04:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:45.510 12:04:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:45.510 12:04:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:45.510 12:04:45 -- nvmf/common.sh@295 -- # net_devs=() 00:08:45.510 12:04:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:45.510 12:04:45 -- 
nvmf/common.sh@296 -- # e810=() 00:08:45.510 12:04:45 -- nvmf/common.sh@296 -- # local -ga e810 00:08:45.510 12:04:45 -- nvmf/common.sh@297 -- # x722=() 00:08:45.510 12:04:45 -- nvmf/common.sh@297 -- # local -ga x722 00:08:45.510 12:04:45 -- nvmf/common.sh@298 -- # mlx=() 00:08:45.510 12:04:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:45.510 12:04:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.510 12:04:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.510 12:04:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.510 12:04:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.510 12:04:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.510 12:04:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.510 12:04:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.510 12:04:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.510 12:04:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.510 12:04:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.510 12:04:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.510 12:04:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:45.510 12:04:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:45.510 12:04:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:45.510 12:04:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.510 12:04:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:45.510 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:45.510 12:04:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.510 12:04:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:45.510 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:45.510 12:04:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:45.510 12:04:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.510 12:04:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.510 12:04:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:45.510 12:04:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.510 12:04:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:08:45.510 Found net devices under 0000:31:00.0: cvl_0_0 00:08:45.510 12:04:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.510 12:04:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.510 12:04:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.510 12:04:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:45.510 12:04:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.510 12:04:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:45.510 Found net devices under 0000:31:00.1: cvl_0_1 00:08:45.510 12:04:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.510 12:04:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:45.510 12:04:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:45.510 12:04:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:45.510 12:04:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.510 12:04:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.510 12:04:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.510 12:04:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:45.510 12:04:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.510 12:04:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.510 12:04:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:45.510 12:04:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.510 12:04:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.510 12:04:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:45.510 12:04:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:45.510 12:04:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.510 12:04:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.510 12:04:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.510 12:04:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.510 12:04:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:45.510 12:04:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.510 12:04:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.510 12:04:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.510 12:04:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:45.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:08:45.510 00:08:45.510 --- 10.0.0.2 ping statistics --- 00:08:45.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.510 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:08:45.510 12:04:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:08:45.510 00:08:45.510 --- 10.0.0.1 ping statistics --- 00:08:45.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.510 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:08:45.510 12:04:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.510 12:04:45 -- nvmf/common.sh@411 -- # return 0 00:08:45.510 12:04:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:45.510 12:04:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.510 12:04:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:45.510 12:04:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.510 12:04:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:45.510 12:04:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:45.510 12:04:45 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:45.510 12:04:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:45.510 12:04:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:45.510 12:04:45 -- common/autotest_common.sh@10 -- # set +x 00:08:45.510 12:04:45 -- nvmf/common.sh@470 -- # nvmfpid=3257610 00:08:45.510 12:04:45 -- nvmf/common.sh@471 -- # waitforlisten 3257610 00:08:45.510 12:04:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:45.510 12:04:45 -- common/autotest_common.sh@817 -- # '[' -z 3257610 ']' 00:08:45.510 12:04:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.510 12:04:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:45.510 12:04:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.510 12:04:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:45.511 12:04:45 -- common/autotest_common.sh@10 -- # set +x 00:08:45.511 [2024-04-26 12:04:45.840979] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:08:45.511 [2024-04-26 12:04:45.841044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.511 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.511 [2024-04-26 12:04:45.913409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.511 [2024-04-26 12:04:45.987194] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.511 [2024-04-26 12:04:45.987234] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.511 [2024-04-26 12:04:45.987243] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.511 [2024-04-26 12:04:45.987250] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.511 [2024-04-26 12:04:45.987257] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:45.511 [2024-04-26 12:04:45.987415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.511 [2024-04-26 12:04:45.987529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.511 [2024-04-26 12:04:45.987686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.511 [2024-04-26 12:04:45.987687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.511 12:04:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:45.511 12:04:46 -- common/autotest_common.sh@850 -- # return 0 00:08:45.511 12:04:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:45.511 12:04:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:45.511 12:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:45.511 12:04:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.511 12:04:46 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:45.511 12:04:46 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:45.511 12:04:46 -- target/multitarget.sh@21 -- # jq length 00:08:45.771 12:04:46 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:45.771 12:04:46 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:45.771 "nvmf_tgt_1" 00:08:45.771 12:04:46 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:45.771 "nvmf_tgt_2" 00:08:45.771 12:04:46 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:45.771 12:04:46 -- target/multitarget.sh@28 -- # jq length 00:08:46.031 12:04:47 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:46.031 12:04:47 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:46.031 true 00:08:46.031 12:04:47 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:46.031 true 00:08:46.292 12:04:47 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:46.292 12:04:47 -- target/multitarget.sh@35 -- # jq length 00:08:46.292 12:04:47 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:46.292 12:04:47 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:46.292 12:04:47 -- target/multitarget.sh@41 -- # nvmftestfini 00:08:46.292 12:04:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:46.292 12:04:47 -- nvmf/common.sh@117 -- # sync 00:08:46.292 12:04:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.292 12:04:47 -- nvmf/common.sh@120 -- # set +e 00:08:46.292 12:04:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.292 12:04:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.292 rmmod nvme_tcp 00:08:46.292 rmmod nvme_fabrics 00:08:46.292 rmmod nvme_keyring 00:08:46.292 12:04:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:46.292 12:04:47 -- nvmf/common.sh@124 -- # set -e 00:08:46.292 12:04:47 -- nvmf/common.sh@125 -- # return 0 
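What multitarget.sh exercises here: the application starts with a single default NVMe-oF target, the test adds two more through the JSON-RPC multi-target interface, confirms the count, then deletes them again. A rough equivalent of the traced commands; SPDK_DIR is shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout and is not a variable the script itself defines:

    MT=$SPDK_DIR/test/nvmf/target/multitarget_rpc.py
    $MT nvmf_get_targets | jq length          # 1: only the default target exists
    $MT nvmf_create_target -n nvmf_tgt_1 -s 32
    $MT nvmf_create_target -n nvmf_tgt_2 -s 32
    $MT nvmf_get_targets | jq length          # 3: default target plus the two new ones
    $MT nvmf_delete_target -n nvmf_tgt_1
    $MT nvmf_delete_target -n nvmf_tgt_2
    $MT nvmf_get_targets | jq length          # back to 1

The bare "nvmf_tgt_1" and "nvmf_tgt_2" strings echoed in the trace are the create calls printing the name of the target they just created.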
00:08:46.292 12:04:47 -- nvmf/common.sh@478 -- # '[' -n 3257610 ']' 00:08:46.292 12:04:47 -- nvmf/common.sh@479 -- # killprocess 3257610 00:08:46.292 12:04:47 -- common/autotest_common.sh@936 -- # '[' -z 3257610 ']' 00:08:46.292 12:04:47 -- common/autotest_common.sh@940 -- # kill -0 3257610 00:08:46.292 12:04:47 -- common/autotest_common.sh@941 -- # uname 00:08:46.292 12:04:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:46.292 12:04:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3257610 00:08:46.292 12:04:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:46.292 12:04:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:46.292 12:04:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3257610' 00:08:46.292 killing process with pid 3257610 00:08:46.292 12:04:47 -- common/autotest_common.sh@955 -- # kill 3257610 00:08:46.292 12:04:47 -- common/autotest_common.sh@960 -- # wait 3257610 00:08:46.552 12:04:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:46.552 12:04:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:46.552 12:04:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:46.552 12:04:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.552 12:04:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.552 12:04:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.552 12:04:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.552 12:04:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.496 12:04:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:48.496 00:08:48.496 real 0m11.297s 00:08:48.496 user 0m9.328s 00:08:48.496 sys 0m5.813s 00:08:48.496 12:04:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:48.496 12:04:49 -- common/autotest_common.sh@10 -- # set +x 00:08:48.496 ************************************ 00:08:48.496 END TEST nvmf_multitarget 00:08:48.496 ************************************ 00:08:48.757 12:04:49 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:48.757 12:04:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:48.757 12:04:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.757 12:04:49 -- common/autotest_common.sh@10 -- # set +x 00:08:48.757 ************************************ 00:08:48.757 START TEST nvmf_rpc 00:08:48.757 ************************************ 00:08:48.757 12:04:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:49.018 * Looking for test storage... 
00:08:49.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.018 12:04:49 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.018 12:04:49 -- nvmf/common.sh@7 -- # uname -s 00:08:49.018 12:04:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.018 12:04:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.018 12:04:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.018 12:04:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.018 12:04:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.018 12:04:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.018 12:04:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.018 12:04:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.018 12:04:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.018 12:04:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.018 12:04:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:49.018 12:04:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:49.018 12:04:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.018 12:04:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.018 12:04:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.018 12:04:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.018 12:04:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.018 12:04:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.018 12:04:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.018 12:04:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.018 12:04:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.018 12:04:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.018 12:04:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.018 12:04:50 -- paths/export.sh@5 -- # export PATH 00:08:49.018 12:04:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.018 12:04:50 -- nvmf/common.sh@47 -- # : 0 00:08:49.018 12:04:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.018 12:04:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.019 12:04:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.019 12:04:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.019 12:04:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.019 12:04:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.019 12:04:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.019 12:04:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.019 12:04:50 -- target/rpc.sh@11 -- # loops=5 00:08:49.019 12:04:50 -- target/rpc.sh@23 -- # nvmftestinit 00:08:49.019 12:04:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:49.019 12:04:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.019 12:04:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:49.019 12:04:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:49.019 12:04:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:49.019 12:04:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.019 12:04:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.019 12:04:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.019 12:04:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:49.019 12:04:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:49.019 12:04:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.019 12:04:50 -- common/autotest_common.sh@10 -- # set +x 00:08:57.155 12:04:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:57.155 12:04:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:57.155 12:04:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:57.155 12:04:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:57.155 12:04:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:57.155 12:04:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:57.155 12:04:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:57.155 12:04:56 -- nvmf/common.sh@295 -- # net_devs=() 00:08:57.155 12:04:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:57.155 12:04:56 -- nvmf/common.sh@296 -- # e810=() 00:08:57.155 12:04:56 -- nvmf/common.sh@296 -- # local -ga e810 00:08:57.155 
12:04:56 -- nvmf/common.sh@297 -- # x722=() 00:08:57.155 12:04:56 -- nvmf/common.sh@297 -- # local -ga x722 00:08:57.155 12:04:56 -- nvmf/common.sh@298 -- # mlx=() 00:08:57.155 12:04:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:57.155 12:04:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.155 12:04:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.155 12:04:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.155 12:04:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.155 12:04:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.155 12:04:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.155 12:04:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.155 12:04:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.155 12:04:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.155 12:04:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.155 12:04:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.155 12:04:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:57.155 12:04:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:57.155 12:04:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:57.155 12:04:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.155 12:04:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:57.155 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:57.155 12:04:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.155 12:04:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:57.155 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:57.155 12:04:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:57.155 12:04:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.155 12:04:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.155 12:04:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:57.155 12:04:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.155 12:04:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:57.155 Found net devices under 0000:31:00.0: cvl_0_0 00:08:57.155 12:04:56 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:57.155 12:04:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.155 12:04:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.155 12:04:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:57.155 12:04:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.155 12:04:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:57.155 Found net devices under 0000:31:00.1: cvl_0_1 00:08:57.155 12:04:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.155 12:04:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:57.155 12:04:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:57.155 12:04:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:57.155 12:04:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:57.155 12:04:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.155 12:04:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.155 12:04:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.155 12:04:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:57.155 12:04:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.155 12:04:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.155 12:04:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:57.155 12:04:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.155 12:04:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.155 12:04:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:57.155 12:04:56 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:57.155 12:04:56 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.155 12:04:56 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.155 12:04:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.155 12:04:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.155 12:04:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:57.155 12:04:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.155 12:04:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.155 12:04:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.155 12:04:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:57.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:08:57.155 00:08:57.155 --- 10.0.0.2 ping statistics --- 00:08:57.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.155 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:08:57.155 12:04:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:57.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:08:57.155 00:08:57.155 --- 10.0.0.1 ping statistics --- 00:08:57.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.155 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:08:57.155 12:04:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.155 12:04:57 -- nvmf/common.sh@411 -- # return 0 00:08:57.155 12:04:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:57.155 12:04:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.155 12:04:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:57.155 12:04:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:57.155 12:04:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.155 12:04:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:57.155 12:04:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:57.155 12:04:57 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:57.155 12:04:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:57.155 12:04:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:57.155 12:04:57 -- common/autotest_common.sh@10 -- # set +x 00:08:57.155 12:04:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.155 12:04:57 -- nvmf/common.sh@470 -- # nvmfpid=3262073 00:08:57.155 12:04:57 -- nvmf/common.sh@471 -- # waitforlisten 3262073 00:08:57.155 12:04:57 -- common/autotest_common.sh@817 -- # '[' -z 3262073 ']' 00:08:57.155 12:04:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.155 12:04:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:57.155 12:04:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.155 12:04:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:57.155 12:04:57 -- common/autotest_common.sh@10 -- # set +x 00:08:57.155 [2024-04-26 12:04:57.265613] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:08:57.155 [2024-04-26 12:04:57.265664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.155 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.155 [2024-04-26 12:04:57.327367] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.155 [2024-04-26 12:04:57.394565] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.155 [2024-04-26 12:04:57.394602] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.155 [2024-04-26 12:04:57.394612] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.155 [2024-04-26 12:04:57.394624] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.156 [2024-04-26 12:04:57.394632] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
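For the rpc.sh test the same namespace setup is repeated and the target application is again launched inside it before any RPCs are issued. The relevant pieces, roughly as they appear in the trace (paths are those of this runner):

    # start nvmf_tgt inside the target namespace: core mask 0xF (four reactors),
    # tracepoint group mask 0xFFFF, shm/instance id 0
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # the harness (waitforlisten) then polls until the app answers on its default
    # RPC socket, /var/tmp/spdk.sock, before rpc_cmd issues the first request

rpc_cmd in the trace is the autotest helper; outside the harness the same RPC methods are normally driven with scripts/rpc.py against that socket.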
00:08:57.156 [2024-04-26 12:04:57.394802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.156 [2024-04-26 12:04:57.394926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.156 [2024-04-26 12:04:57.395083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.156 [2024-04-26 12:04:57.395084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.156 12:04:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:57.156 12:04:58 -- common/autotest_common.sh@850 -- # return 0 00:08:57.156 12:04:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:57.156 12:04:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:57.156 12:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:57.156 12:04:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.156 12:04:58 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:57.156 12:04:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.156 12:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:57.156 12:04:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.156 12:04:58 -- target/rpc.sh@26 -- # stats='{ 00:08:57.156 "tick_rate": 2400000000, 00:08:57.156 "poll_groups": [ 00:08:57.156 { 00:08:57.156 "name": "nvmf_tgt_poll_group_0", 00:08:57.156 "admin_qpairs": 0, 00:08:57.156 "io_qpairs": 0, 00:08:57.156 "current_admin_qpairs": 0, 00:08:57.156 "current_io_qpairs": 0, 00:08:57.156 "pending_bdev_io": 0, 00:08:57.156 "completed_nvme_io": 0, 00:08:57.156 "transports": [] 00:08:57.156 }, 00:08:57.156 { 00:08:57.156 "name": "nvmf_tgt_poll_group_1", 00:08:57.156 "admin_qpairs": 0, 00:08:57.156 "io_qpairs": 0, 00:08:57.156 "current_admin_qpairs": 0, 00:08:57.156 "current_io_qpairs": 0, 00:08:57.156 "pending_bdev_io": 0, 00:08:57.156 "completed_nvme_io": 0, 00:08:57.156 "transports": [] 00:08:57.156 }, 00:08:57.156 { 00:08:57.156 "name": "nvmf_tgt_poll_group_2", 00:08:57.156 "admin_qpairs": 0, 00:08:57.156 "io_qpairs": 0, 00:08:57.156 "current_admin_qpairs": 0, 00:08:57.156 "current_io_qpairs": 0, 00:08:57.156 "pending_bdev_io": 0, 00:08:57.156 "completed_nvme_io": 0, 00:08:57.156 "transports": [] 00:08:57.156 }, 00:08:57.156 { 00:08:57.156 "name": "nvmf_tgt_poll_group_3", 00:08:57.156 "admin_qpairs": 0, 00:08:57.156 "io_qpairs": 0, 00:08:57.156 "current_admin_qpairs": 0, 00:08:57.156 "current_io_qpairs": 0, 00:08:57.156 "pending_bdev_io": 0, 00:08:57.156 "completed_nvme_io": 0, 00:08:57.156 "transports": [] 00:08:57.156 } 00:08:57.156 ] 00:08:57.156 }' 00:08:57.156 12:04:58 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:57.156 12:04:58 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:57.156 12:04:58 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:57.156 12:04:58 -- target/rpc.sh@15 -- # wc -l 00:08:57.156 12:04:58 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:57.156 12:04:58 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:57.156 12:04:58 -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:57.156 12:04:58 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.156 12:04:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.156 12:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:57.156 [2024-04-26 12:04:58.227961] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.156 12:04:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.156 12:04:58 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:57.156 12:04:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.156 12:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:57.156 12:04:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.156 12:04:58 -- target/rpc.sh@33 -- # stats='{ 00:08:57.156 "tick_rate": 2400000000, 00:08:57.156 "poll_groups": [ 00:08:57.156 { 00:08:57.156 "name": "nvmf_tgt_poll_group_0", 00:08:57.156 "admin_qpairs": 0, 00:08:57.156 "io_qpairs": 0, 00:08:57.156 "current_admin_qpairs": 0, 00:08:57.156 "current_io_qpairs": 0, 00:08:57.156 "pending_bdev_io": 0, 00:08:57.156 "completed_nvme_io": 0, 00:08:57.156 "transports": [ 00:08:57.156 { 00:08:57.156 "trtype": "TCP" 00:08:57.156 } 00:08:57.156 ] 00:08:57.156 }, 00:08:57.156 { 00:08:57.156 "name": "nvmf_tgt_poll_group_1", 00:08:57.156 "admin_qpairs": 0, 00:08:57.156 "io_qpairs": 0, 00:08:57.156 "current_admin_qpairs": 0, 00:08:57.156 "current_io_qpairs": 0, 00:08:57.156 "pending_bdev_io": 0, 00:08:57.156 "completed_nvme_io": 0, 00:08:57.156 "transports": [ 00:08:57.156 { 00:08:57.156 "trtype": "TCP" 00:08:57.156 } 00:08:57.156 ] 00:08:57.156 }, 00:08:57.156 { 00:08:57.156 "name": "nvmf_tgt_poll_group_2", 00:08:57.156 "admin_qpairs": 0, 00:08:57.156 "io_qpairs": 0, 00:08:57.156 "current_admin_qpairs": 0, 00:08:57.156 "current_io_qpairs": 0, 00:08:57.156 "pending_bdev_io": 0, 00:08:57.156 "completed_nvme_io": 0, 00:08:57.156 "transports": [ 00:08:57.156 { 00:08:57.156 "trtype": "TCP" 00:08:57.156 } 00:08:57.156 ] 00:08:57.156 }, 00:08:57.156 { 00:08:57.156 "name": "nvmf_tgt_poll_group_3", 00:08:57.156 "admin_qpairs": 0, 00:08:57.156 "io_qpairs": 0, 00:08:57.156 "current_admin_qpairs": 0, 00:08:57.156 "current_io_qpairs": 0, 00:08:57.156 "pending_bdev_io": 0, 00:08:57.156 "completed_nvme_io": 0, 00:08:57.156 "transports": [ 00:08:57.156 { 00:08:57.156 "trtype": "TCP" 00:08:57.156 } 00:08:57.156 ] 00:08:57.156 } 00:08:57.156 ] 00:08:57.156 }' 00:08:57.156 12:04:58 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:57.156 12:04:58 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:57.156 12:04:58 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:57.156 12:04:58 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:57.156 12:04:58 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:57.156 12:04:58 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:57.156 12:04:58 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:57.156 12:04:58 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:57.156 12:04:58 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:57.156 12:04:58 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:57.156 12:04:58 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:57.156 12:04:58 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:57.156 12:04:58 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:57.156 12:04:58 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:57.156 12:04:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.156 12:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:57.156 Malloc1 00:08:57.156 12:04:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.156 12:04:58 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:57.156 12:04:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.156 12:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:57.416 
12:04:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.416 12:04:58 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.416 12:04:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.416 12:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:57.416 12:04:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.416 12:04:58 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:57.416 12:04:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.416 12:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:57.416 12:04:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.416 12:04:58 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.416 12:04:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.416 12:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:57.416 [2024-04-26 12:04:58.415829] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.416 12:04:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.416 12:04:58 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:08:57.416 12:04:58 -- common/autotest_common.sh@638 -- # local es=0 00:08:57.416 12:04:58 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:08:57.416 12:04:58 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:57.416 12:04:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:57.416 12:04:58 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:57.416 12:04:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:57.416 12:04:58 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:57.416 12:04:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:57.416 12:04:58 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:57.416 12:04:58 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:57.416 12:04:58 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:08:57.416 [2024-04-26 12:04:58.442554] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:08:57.416 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:57.416 could not add new controller: failed to write to nvme-fabrics device 00:08:57.416 12:04:58 -- common/autotest_common.sh@641 -- # es=1 00:08:57.416 12:04:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:57.416 12:04:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:57.416 12:04:58 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:08:57.416 12:04:58 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:57.416 12:04:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.416 12:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:57.416 12:04:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.416 12:04:58 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:59.330 12:05:00 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:59.330 12:05:00 -- common/autotest_common.sh@1184 -- # local i=0 00:08:59.330 12:05:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:59.330 12:05:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:59.330 12:05:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:01.244 12:05:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:01.244 12:05:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:01.244 12:05:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:01.244 12:05:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:01.244 12:05:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:01.244 12:05:02 -- common/autotest_common.sh@1194 -- # return 0 00:09:01.244 12:05:02 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:01.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.244 12:05:02 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:01.244 12:05:02 -- common/autotest_common.sh@1205 -- # local i=0 00:09:01.244 12:05:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:01.244 12:05:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.244 12:05:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:01.244 12:05:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.244 12:05:02 -- common/autotest_common.sh@1217 -- # return 0 00:09:01.244 12:05:02 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:01.244 12:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.244 12:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:01.244 12:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.244 12:05:02 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:01.244 12:05:02 -- common/autotest_common.sh@638 -- # local es=0 00:09:01.244 12:05:02 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:01.244 12:05:02 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:01.244 12:05:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:01.244 12:05:02 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:01.244 12:05:02 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:01.244 12:05:02 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:01.244 12:05:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:01.244 12:05:02 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:01.244 12:05:02 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:01.244 12:05:02 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:01.244 [2024-04-26 12:05:02.199089] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:01.244 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:01.244 could not add new controller: failed to write to nvme-fabrics device 00:09:01.244 12:05:02 -- common/autotest_common.sh@641 -- # es=1 00:09:01.244 12:05:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:01.244 12:05:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:01.244 12:05:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:01.244 12:05:02 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:01.244 12:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.244 12:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:01.244 12:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.244 12:05:02 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.629 12:05:03 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:02.629 12:05:03 -- common/autotest_common.sh@1184 -- # local i=0 00:09:02.629 12:05:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.629 12:05:03 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:02.629 12:05:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:05.175 12:05:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:05.175 12:05:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:05.175 12:05:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.175 12:05:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:05.175 12:05:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.175 12:05:05 -- common/autotest_common.sh@1194 -- # return 0 00:09:05.175 12:05:05 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.175 12:05:05 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.175 12:05:05 -- common/autotest_common.sh@1205 -- # local i=0 00:09:05.175 12:05:05 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:05.175 12:05:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.175 12:05:05 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:05.175 12:05:05 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.175 12:05:05 -- common/autotest_common.sh@1217 -- # return 0 00:09:05.175 12:05:05 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.175 12:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.175 12:05:05 -- common/autotest_common.sh@10 -- # set +x 00:09:05.175 12:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.175 12:05:05 -- target/rpc.sh@81 -- # seq 1 5 00:09:05.175 12:05:05 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:05.175 12:05:05 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.175 12:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.175 12:05:05 -- common/autotest_common.sh@10 -- # set +x 00:09:05.175 12:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.175 12:05:05 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.175 12:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.175 12:05:05 -- common/autotest_common.sh@10 -- # set +x 00:09:05.175 [2024-04-26 12:05:05.948906] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.175 12:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.175 12:05:05 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:05.175 12:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.175 12:05:05 -- common/autotest_common.sh@10 -- # set +x 00:09:05.175 12:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.175 12:05:05 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.175 12:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.175 12:05:05 -- common/autotest_common.sh@10 -- # set +x 00:09:05.175 12:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.175 12:05:05 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:06.560 12:05:07 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:06.560 12:05:07 -- common/autotest_common.sh@1184 -- # local i=0 00:09:06.560 12:05:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.560 12:05:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:06.560 12:05:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:08.544 12:05:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:08.544 12:05:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:08.544 12:05:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.544 12:05:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:08.544 12:05:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.544 12:05:09 -- common/autotest_common.sh@1194 -- # return 0 00:09:08.544 12:05:09 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.544 12:05:09 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.544 12:05:09 -- common/autotest_common.sh@1205 -- # local i=0 00:09:08.544 12:05:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:08.544 12:05:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
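From this point the test repeats the same provision/attach/detach sequence for each of the five iterations (loops=5, seq 1 5 in rpc.sh). One iteration, condensed from the traced rpc_cmd and nvme calls; NVME_HOSTNQN and NVME_HOSTID are the values produced by nvme gen-hostnqn in nvmf/common.sh earlier in the log:

    # target side: subsystem with serial SPDKISFASTANDAWESOME, TCP listener, Malloc1 as nsid 5
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # initiator side: connect, wait for a block device with that serial, then disconnect
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # target side: drop the namespace (nsid 5) and the subsystem again
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The non-looped part of the trace just above is the access-control variant of the same flow: once allow_any_host is disabled with -d, a host that has not been added via nvmf_subsystem_add_host fails to connect with the "does not allow host" error recorded earlier, and succeeds again only after nvmf_subsystem_add_host or nvmf_subsystem_allow_any_host -e.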
00:09:08.544 12:05:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:08.544 12:05:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.544 12:05:09 -- common/autotest_common.sh@1217 -- # return 0 00:09:08.544 12:05:09 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:08.544 12:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.544 12:05:09 -- common/autotest_common.sh@10 -- # set +x 00:09:08.544 12:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.544 12:05:09 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.544 12:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.544 12:05:09 -- common/autotest_common.sh@10 -- # set +x 00:09:08.544 12:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.544 12:05:09 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:08.544 12:05:09 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:08.544 12:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.544 12:05:09 -- common/autotest_common.sh@10 -- # set +x 00:09:08.544 12:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.544 12:05:09 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.544 12:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.544 12:05:09 -- common/autotest_common.sh@10 -- # set +x 00:09:08.544 [2024-04-26 12:05:09.683463] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.544 12:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.544 12:05:09 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:08.544 12:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.544 12:05:09 -- common/autotest_common.sh@10 -- # set +x 00:09:08.544 12:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.544 12:05:09 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:08.544 12:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.544 12:05:09 -- common/autotest_common.sh@10 -- # set +x 00:09:08.544 12:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.544 12:05:09 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:10.458 12:05:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.458 12:05:11 -- common/autotest_common.sh@1184 -- # local i=0 00:09:10.458 12:05:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.458 12:05:11 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:10.458 12:05:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:12.372 12:05:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:12.372 12:05:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:12.372 12:05:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:12.372 12:05:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:12.372 12:05:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.372 12:05:13 -- 
common/autotest_common.sh@1194 -- # return 0 00:09:12.372 12:05:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:12.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.372 12:05:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:12.372 12:05:13 -- common/autotest_common.sh@1205 -- # local i=0 00:09:12.372 12:05:13 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:12.372 12:05:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:12.372 12:05:13 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:12.372 12:05:13 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:12.372 12:05:13 -- common/autotest_common.sh@1217 -- # return 0 00:09:12.372 12:05:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:12.372 12:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.372 12:05:13 -- common/autotest_common.sh@10 -- # set +x 00:09:12.372 12:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.372 12:05:13 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.372 12:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.372 12:05:13 -- common/autotest_common.sh@10 -- # set +x 00:09:12.372 12:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.372 12:05:13 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:12.372 12:05:13 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:12.372 12:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.372 12:05:13 -- common/autotest_common.sh@10 -- # set +x 00:09:12.372 12:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.372 12:05:13 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.372 12:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.372 12:05:13 -- common/autotest_common.sh@10 -- # set +x 00:09:12.372 [2024-04-26 12:05:13.422313] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.372 12:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.372 12:05:13 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:12.372 12:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.372 12:05:13 -- common/autotest_common.sh@10 -- # set +x 00:09:12.372 12:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.372 12:05:13 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:12.372 12:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.372 12:05:13 -- common/autotest_common.sh@10 -- # set +x 00:09:12.372 12:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.372 12:05:13 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.285 12:05:14 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:14.285 12:05:14 -- common/autotest_common.sh@1184 -- # local i=0 00:09:14.285 12:05:14 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.285 12:05:14 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:09:14.285 12:05:14 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:16.197 12:05:17 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:16.197 12:05:17 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:16.197 12:05:17 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:16.197 12:05:17 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:16.197 12:05:17 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:16.197 12:05:17 -- common/autotest_common.sh@1194 -- # return 0 00:09:16.197 12:05:17 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:16.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.197 12:05:17 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:16.197 12:05:17 -- common/autotest_common.sh@1205 -- # local i=0 00:09:16.197 12:05:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:16.197 12:05:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.197 12:05:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:16.197 12:05:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.197 12:05:17 -- common/autotest_common.sh@1217 -- # return 0 00:09:16.197 12:05:17 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.197 12:05:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.197 12:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.197 12:05:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.197 12:05:17 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.197 12:05:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.197 12:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.197 12:05:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.197 12:05:17 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:16.197 12:05:17 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:16.197 12:05:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.197 12:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.197 12:05:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.197 12:05:17 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.197 12:05:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.197 12:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.197 [2024-04-26 12:05:17.159972] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.197 12:05:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.197 12:05:17 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:16.197 12:05:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.197 12:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.197 12:05:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.197 12:05:17 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:16.197 12:05:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.197 12:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:16.197 12:05:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.197 
12:05:17 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.580 12:05:18 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.580 12:05:18 -- common/autotest_common.sh@1184 -- # local i=0 00:09:17.580 12:05:18 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.580 12:05:18 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:17.580 12:05:18 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:19.491 12:05:20 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:19.491 12:05:20 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:19.491 12:05:20 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.492 12:05:20 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:19.492 12:05:20 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.492 12:05:20 -- common/autotest_common.sh@1194 -- # return 0 00:09:19.492 12:05:20 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.752 12:05:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.752 12:05:20 -- common/autotest_common.sh@1205 -- # local i=0 00:09:19.752 12:05:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:19.752 12:05:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.752 12:05:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:19.752 12:05:20 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.752 12:05:20 -- common/autotest_common.sh@1217 -- # return 0 00:09:19.752 12:05:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.752 12:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.752 12:05:20 -- common/autotest_common.sh@10 -- # set +x 00:09:19.752 12:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.752 12:05:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.752 12:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.752 12:05:20 -- common/autotest_common.sh@10 -- # set +x 00:09:19.752 12:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.752 12:05:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:19.752 12:05:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.752 12:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.752 12:05:20 -- common/autotest_common.sh@10 -- # set +x 00:09:19.752 12:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.752 12:05:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.752 12:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.752 12:05:20 -- common/autotest_common.sh@10 -- # set +x 00:09:19.752 [2024-04-26 12:05:20.865215] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.752 12:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.752 12:05:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:19.752 
12:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.752 12:05:20 -- common/autotest_common.sh@10 -- # set +x 00:09:19.752 12:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.752 12:05:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.752 12:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.752 12:05:20 -- common/autotest_common.sh@10 -- # set +x 00:09:19.752 12:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.752 12:05:20 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.665 12:05:22 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:21.665 12:05:22 -- common/autotest_common.sh@1184 -- # local i=0 00:09:21.665 12:05:22 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.665 12:05:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:21.665 12:05:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:23.577 12:05:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:23.577 12:05:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:23.577 12:05:24 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.577 12:05:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:23.577 12:05:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.577 12:05:24 -- common/autotest_common.sh@1194 -- # return 0 00:09:23.577 12:05:24 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.577 12:05:24 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.577 12:05:24 -- common/autotest_common.sh@1205 -- # local i=0 00:09:23.577 12:05:24 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:23.577 12:05:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.577 12:05:24 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:23.577 12:05:24 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.577 12:05:24 -- common/autotest_common.sh@1217 -- # return 0 00:09:23.577 12:05:24 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:23.577 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.577 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.577 12:05:24 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.577 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.577 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.577 12:05:24 -- target/rpc.sh@99 -- # seq 1 5 00:09:23.577 12:05:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:23.577 12:05:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:23.577 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.577 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.577 12:05:24 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.577 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.577 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 [2024-04-26 12:05:24.613193] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.577 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.577 12:05:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:23.577 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.577 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.577 12:05:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:23.577 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.577 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.577 12:05:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.577 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.577 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.577 12:05:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.577 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.577 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.577 12:05:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:23.577 12:05:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:23.577 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.577 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.577 12:05:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.577 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.577 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 [2024-04-26 12:05:24.677323] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.577 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.577 12:05:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:23.577 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.577 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.577 12:05:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:23.577 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.577 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.577 12:05:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.577 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.577 12:05:24 -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.577 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.577 12:05:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.578 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.578 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.578 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.578 12:05:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:23.578 12:05:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:23.578 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.578 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.578 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.578 12:05:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.578 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.578 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.578 [2024-04-26 12:05:24.733490] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.578 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.578 12:05:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:23.578 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.578 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.578 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.578 12:05:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:23.578 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.578 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.578 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.578 12:05:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.578 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.578 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.578 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.578 12:05:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.578 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.578 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.578 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.578 12:05:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:23.578 12:05:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:23.578 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.578 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.578 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.578 12:05:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.578 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.578 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.578 [2024-04-26 12:05:24.793704] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.838 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.838 
12:05:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:23.838 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.838 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.838 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.838 12:05:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:23.838 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.838 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.838 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.838 12:05:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.838 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.838 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.838 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.838 12:05:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.838 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.838 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.839 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.839 12:05:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:23.839 12:05:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:23.839 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.839 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.839 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.839 12:05:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.839 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.839 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.839 [2024-04-26 12:05:24.853909] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.839 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.839 12:05:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:23.839 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.839 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.839 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.839 12:05:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:23.839 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.839 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.839 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.839 12:05:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.839 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.839 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.839 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.839 12:05:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.839 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.839 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.839 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.839 12:05:24 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
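The trace above repeats the same subsystem lifecycle several times before the stats check. A condensed sketch of one iteration, using only the RPCs visible in the log (the rpc.py path is taken from elsewhere in this run and the loop bound from the 'seq 1 5' echoed above; this is not the verbatim rpc.sh source):

# Hedged sketch of one loop iteration from the trace above (not the actual rpc.sh code).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path assumed from this run
for i in $(seq 1 5); do                                                   # bound taken from 'seq 1 5' above
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc_py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done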
00:09:23.839 12:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.839 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.839 12:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.839 12:05:24 -- target/rpc.sh@110 -- # stats='{ 00:09:23.839 "tick_rate": 2400000000, 00:09:23.839 "poll_groups": [ 00:09:23.839 { 00:09:23.839 "name": "nvmf_tgt_poll_group_0", 00:09:23.839 "admin_qpairs": 0, 00:09:23.839 "io_qpairs": 224, 00:09:23.839 "current_admin_qpairs": 0, 00:09:23.839 "current_io_qpairs": 0, 00:09:23.839 "pending_bdev_io": 0, 00:09:23.839 "completed_nvme_io": 273, 00:09:23.839 "transports": [ 00:09:23.839 { 00:09:23.839 "trtype": "TCP" 00:09:23.839 } 00:09:23.839 ] 00:09:23.839 }, 00:09:23.839 { 00:09:23.839 "name": "nvmf_tgt_poll_group_1", 00:09:23.839 "admin_qpairs": 1, 00:09:23.839 "io_qpairs": 223, 00:09:23.839 "current_admin_qpairs": 0, 00:09:23.839 "current_io_qpairs": 0, 00:09:23.839 "pending_bdev_io": 0, 00:09:23.839 "completed_nvme_io": 521, 00:09:23.839 "transports": [ 00:09:23.839 { 00:09:23.839 "trtype": "TCP" 00:09:23.839 } 00:09:23.839 ] 00:09:23.839 }, 00:09:23.839 { 00:09:23.839 "name": "nvmf_tgt_poll_group_2", 00:09:23.839 "admin_qpairs": 6, 00:09:23.839 "io_qpairs": 218, 00:09:23.839 "current_admin_qpairs": 0, 00:09:23.839 "current_io_qpairs": 0, 00:09:23.839 "pending_bdev_io": 0, 00:09:23.839 "completed_nvme_io": 221, 00:09:23.839 "transports": [ 00:09:23.839 { 00:09:23.839 "trtype": "TCP" 00:09:23.839 } 00:09:23.839 ] 00:09:23.839 }, 00:09:23.839 { 00:09:23.839 "name": "nvmf_tgt_poll_group_3", 00:09:23.839 "admin_qpairs": 0, 00:09:23.839 "io_qpairs": 224, 00:09:23.839 "current_admin_qpairs": 0, 00:09:23.839 "current_io_qpairs": 0, 00:09:23.839 "pending_bdev_io": 0, 00:09:23.839 "completed_nvme_io": 224, 00:09:23.839 "transports": [ 00:09:23.839 { 00:09:23.839 "trtype": "TCP" 00:09:23.839 } 00:09:23.839 ] 00:09:23.839 } 00:09:23.839 ] 00:09:23.839 }' 00:09:23.839 12:05:24 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:23.839 12:05:24 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:23.839 12:05:24 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:23.839 12:05:24 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:23.839 12:05:24 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:23.839 12:05:24 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:23.839 12:05:24 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:23.839 12:05:24 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:23.839 12:05:24 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:23.839 12:05:25 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:23.839 12:05:25 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:23.839 12:05:25 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:23.839 12:05:25 -- target/rpc.sh@123 -- # nvmftestfini 00:09:23.839 12:05:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:23.839 12:05:25 -- nvmf/common.sh@117 -- # sync 00:09:23.839 12:05:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:23.839 12:05:25 -- nvmf/common.sh@120 -- # set +e 00:09:23.839 12:05:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:23.839 12:05:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:23.839 rmmod nvme_tcp 00:09:23.839 rmmod nvme_fabrics 00:09:23.839 rmmod nvme_keyring 00:09:24.100 12:05:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.101 12:05:25 -- nvmf/common.sh@124 -- # set -e 00:09:24.101 12:05:25 -- 
nvmf/common.sh@125 -- # return 0 00:09:24.101 12:05:25 -- nvmf/common.sh@478 -- # '[' -n 3262073 ']' 00:09:24.101 12:05:25 -- nvmf/common.sh@479 -- # killprocess 3262073 00:09:24.101 12:05:25 -- common/autotest_common.sh@936 -- # '[' -z 3262073 ']' 00:09:24.101 12:05:25 -- common/autotest_common.sh@940 -- # kill -0 3262073 00:09:24.101 12:05:25 -- common/autotest_common.sh@941 -- # uname 00:09:24.101 12:05:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:24.101 12:05:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3262073 00:09:24.101 12:05:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:24.101 12:05:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:24.101 12:05:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3262073' 00:09:24.101 killing process with pid 3262073 00:09:24.101 12:05:25 -- common/autotest_common.sh@955 -- # kill 3262073 00:09:24.101 12:05:25 -- common/autotest_common.sh@960 -- # wait 3262073 00:09:24.101 12:05:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:24.101 12:05:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:24.101 12:05:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:24.101 12:05:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.101 12:05:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.101 12:05:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.101 12:05:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.101 12:05:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.641 12:05:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:26.641 00:09:26.641 real 0m37.455s 00:09:26.641 user 1m53.384s 00:09:26.641 sys 0m7.206s 00:09:26.641 12:05:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:26.641 12:05:27 -- common/autotest_common.sh@10 -- # set +x 00:09:26.641 ************************************ 00:09:26.641 END TEST nvmf_rpc 00:09:26.641 ************************************ 00:09:26.641 12:05:27 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:26.641 12:05:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:26.641 12:05:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:26.641 12:05:27 -- common/autotest_common.sh@10 -- # set +x 00:09:26.641 ************************************ 00:09:26.641 START TEST nvmf_invalid 00:09:26.641 ************************************ 00:09:26.641 12:05:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:26.641 * Looking for test storage... 
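Just before the nvmf_rpc teardown above, target/rpc.sh cross-checks the nvmf_get_stats output with its jsum helper (jq piped into awk). A minimal stand-alone sketch of that aggregation, assuming the stats JSON has already been captured in $stats as the script does:

# Sketch of the jsum-style aggregation seen in the trace; assumes $stats holds
# the nvmf_get_stats JSON printed earlier in this log.
jsum() {
  local filter=$1
  jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}
jsum '.poll_groups[].admin_qpairs'   # summed to 7 in this run
jsum '.poll_groups[].io_qpairs'      # summed to 889 in this run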
00:09:26.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.641 12:05:27 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.641 12:05:27 -- nvmf/common.sh@7 -- # uname -s 00:09:26.641 12:05:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.641 12:05:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.641 12:05:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.641 12:05:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.641 12:05:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.641 12:05:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.641 12:05:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.641 12:05:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.641 12:05:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.641 12:05:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.641 12:05:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:26.641 12:05:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:26.641 12:05:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.641 12:05:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.641 12:05:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.641 12:05:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.641 12:05:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.641 12:05:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.641 12:05:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.641 12:05:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.641 12:05:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.641 12:05:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.642 12:05:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.642 12:05:27 -- paths/export.sh@5 -- # export PATH 00:09:26.642 12:05:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.642 12:05:27 -- nvmf/common.sh@47 -- # : 0 00:09:26.642 12:05:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:26.642 12:05:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:26.642 12:05:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.642 12:05:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.642 12:05:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.642 12:05:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:26.642 12:05:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:26.642 12:05:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:26.642 12:05:27 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:26.642 12:05:27 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.642 12:05:27 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:26.642 12:05:27 -- target/invalid.sh@14 -- # target=foobar 00:09:26.642 12:05:27 -- target/invalid.sh@16 -- # RANDOM=0 00:09:26.642 12:05:27 -- target/invalid.sh@34 -- # nvmftestinit 00:09:26.642 12:05:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:26.642 12:05:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.642 12:05:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:26.642 12:05:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:26.642 12:05:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:26.642 12:05:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.642 12:05:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.642 12:05:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.642 12:05:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:26.642 12:05:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:26.642 12:05:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:26.642 12:05:27 -- common/autotest_common.sh@10 -- # set +x 00:09:34.797 12:05:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:34.797 12:05:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:34.797 12:05:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:34.797 12:05:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:34.797 12:05:34 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:34.797 12:05:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:34.797 12:05:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:34.797 12:05:34 -- nvmf/common.sh@295 -- # net_devs=() 00:09:34.797 12:05:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:34.797 12:05:34 -- nvmf/common.sh@296 -- # e810=() 00:09:34.797 12:05:34 -- nvmf/common.sh@296 -- # local -ga e810 00:09:34.797 12:05:34 -- nvmf/common.sh@297 -- # x722=() 00:09:34.797 12:05:34 -- nvmf/common.sh@297 -- # local -ga x722 00:09:34.797 12:05:34 -- nvmf/common.sh@298 -- # mlx=() 00:09:34.797 12:05:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:34.797 12:05:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.797 12:05:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.797 12:05:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.797 12:05:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.797 12:05:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.797 12:05:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.797 12:05:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.797 12:05:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.797 12:05:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.797 12:05:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.797 12:05:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.797 12:05:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:34.797 12:05:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:34.797 12:05:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:34.797 12:05:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:34.797 12:05:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:34.797 12:05:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:34.797 12:05:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.797 12:05:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:34.797 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:34.797 12:05:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:34.797 12:05:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:34.797 12:05:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.797 12:05:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.797 12:05:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:34.797 12:05:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.797 12:05:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:34.797 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:34.797 12:05:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:34.797 12:05:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:34.797 12:05:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.797 12:05:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.798 12:05:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:34.798 12:05:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:34.798 12:05:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:34.798 12:05:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:34.798 12:05:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.798 
12:05:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.798 12:05:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:34.798 12:05:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.798 12:05:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:34.798 Found net devices under 0000:31:00.0: cvl_0_0 00:09:34.798 12:05:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.798 12:05:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.798 12:05:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.798 12:05:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:34.798 12:05:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.798 12:05:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:34.798 Found net devices under 0000:31:00.1: cvl_0_1 00:09:34.798 12:05:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.798 12:05:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:34.798 12:05:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:34.798 12:05:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:34.798 12:05:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:34.798 12:05:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:34.798 12:05:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.798 12:05:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.798 12:05:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.798 12:05:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:34.798 12:05:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.798 12:05:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.798 12:05:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:34.798 12:05:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.798 12:05:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.798 12:05:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:34.798 12:05:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:34.798 12:05:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.798 12:05:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.798 12:05:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.798 12:05:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.798 12:05:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:34.798 12:05:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.798 12:05:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.798 12:05:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.798 12:05:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:34.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.841 ms 00:09:34.798 00:09:34.798 --- 10.0.0.2 ping statistics --- 00:09:34.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.798 rtt min/avg/max/mdev = 0.841/0.841/0.841/0.000 ms 00:09:34.798 12:05:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.549 ms 00:09:34.798 00:09:34.798 --- 10.0.0.1 ping statistics --- 00:09:34.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.798 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:09:34.798 12:05:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.798 12:05:34 -- nvmf/common.sh@411 -- # return 0 00:09:34.798 12:05:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:34.798 12:05:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.798 12:05:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:34.798 12:05:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:34.798 12:05:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.798 12:05:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:34.798 12:05:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:34.798 12:05:34 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:34.798 12:05:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:34.798 12:05:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:34.798 12:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:34.798 12:05:34 -- nvmf/common.sh@470 -- # nvmfpid=3271984 00:09:34.798 12:05:34 -- nvmf/common.sh@471 -- # waitforlisten 3271984 00:09:34.798 12:05:34 -- common/autotest_common.sh@817 -- # '[' -z 3271984 ']' 00:09:34.798 12:05:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.798 12:05:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.798 12:05:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:34.798 12:05:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.798 12:05:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:34.798 12:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:34.798 [2024-04-26 12:05:34.966915] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:09:34.798 [2024-04-26 12:05:34.966976] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.798 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.798 [2024-04-26 12:05:35.043100] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.798 [2024-04-26 12:05:35.118691] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.798 [2024-04-26 12:05:35.118732] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.798 [2024-04-26 12:05:35.118745] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.798 [2024-04-26 12:05:35.118751] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.798 [2024-04-26 12:05:35.118756] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
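The reactor start-up that follows is the tail end of a longer bring-up: nvmf_tcp_init first moved the target-side port into a network namespace and wired up addresses, as traced further above. A simplified summary of those steps (interface names and IPs are taken from this run's log; this is not the verbatim nvmf/common.sh code):

# Hedged summary of the nvmf_tcp_init plumbing traced above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
# nvmf_tgt is then launched inside the namespace, as echoed above:
# ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF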
00:09:34.798 [2024-04-26 12:05:35.118897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.798 [2024-04-26 12:05:35.119029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.798 [2024-04-26 12:05:35.119187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.798 [2024-04-26 12:05:35.119186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.798 12:05:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:34.798 12:05:35 -- common/autotest_common.sh@850 -- # return 0 00:09:34.798 12:05:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:34.798 12:05:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:34.798 12:05:35 -- common/autotest_common.sh@10 -- # set +x 00:09:34.798 12:05:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.798 12:05:35 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:34.798 12:05:35 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15940 00:09:34.798 [2024-04-26 12:05:35.935821] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:34.798 12:05:35 -- target/invalid.sh@40 -- # out='request: 00:09:34.798 { 00:09:34.798 "nqn": "nqn.2016-06.io.spdk:cnode15940", 00:09:34.798 "tgt_name": "foobar", 00:09:34.798 "method": "nvmf_create_subsystem", 00:09:34.798 "req_id": 1 00:09:34.798 } 00:09:34.798 Got JSON-RPC error response 00:09:34.798 response: 00:09:34.798 { 00:09:34.798 "code": -32603, 00:09:34.798 "message": "Unable to find target foobar" 00:09:34.798 }' 00:09:34.798 12:05:35 -- target/invalid.sh@41 -- # [[ request: 00:09:34.798 { 00:09:34.798 "nqn": "nqn.2016-06.io.spdk:cnode15940", 00:09:34.798 "tgt_name": "foobar", 00:09:34.798 "method": "nvmf_create_subsystem", 00:09:34.798 "req_id": 1 00:09:34.798 } 00:09:34.798 Got JSON-RPC error response 00:09:34.798 response: 00:09:34.798 { 00:09:34.798 "code": -32603, 00:09:34.798 "message": "Unable to find target foobar" 00:09:34.798 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:34.798 12:05:35 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:34.798 12:05:35 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10193 00:09:35.058 [2024-04-26 12:05:36.104401] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10193: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:35.058 12:05:36 -- target/invalid.sh@45 -- # out='request: 00:09:35.058 { 00:09:35.058 "nqn": "nqn.2016-06.io.spdk:cnode10193", 00:09:35.058 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:35.058 "method": "nvmf_create_subsystem", 00:09:35.058 "req_id": 1 00:09:35.058 } 00:09:35.058 Got JSON-RPC error response 00:09:35.058 response: 00:09:35.058 { 00:09:35.058 "code": -32602, 00:09:35.058 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:35.058 }' 00:09:35.058 12:05:36 -- target/invalid.sh@46 -- # [[ request: 00:09:35.058 { 00:09:35.058 "nqn": "nqn.2016-06.io.spdk:cnode10193", 00:09:35.058 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:35.058 "method": "nvmf_create_subsystem", 00:09:35.058 "req_id": 1 00:09:35.058 } 00:09:35.058 Got JSON-RPC error response 00:09:35.058 response: 00:09:35.058 { 
00:09:35.058 "code": -32602, 00:09:35.058 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:35.058 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:35.058 12:05:36 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:35.058 12:05:36 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32640 00:09:35.320 [2024-04-26 12:05:36.280991] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32640: invalid model number 'SPDK_Controller' 00:09:35.320 12:05:36 -- target/invalid.sh@50 -- # out='request: 00:09:35.320 { 00:09:35.320 "nqn": "nqn.2016-06.io.spdk:cnode32640", 00:09:35.320 "model_number": "SPDK_Controller\u001f", 00:09:35.320 "method": "nvmf_create_subsystem", 00:09:35.320 "req_id": 1 00:09:35.320 } 00:09:35.320 Got JSON-RPC error response 00:09:35.320 response: 00:09:35.320 { 00:09:35.320 "code": -32602, 00:09:35.320 "message": "Invalid MN SPDK_Controller\u001f" 00:09:35.320 }' 00:09:35.320 12:05:36 -- target/invalid.sh@51 -- # [[ request: 00:09:35.320 { 00:09:35.320 "nqn": "nqn.2016-06.io.spdk:cnode32640", 00:09:35.320 "model_number": "SPDK_Controller\u001f", 00:09:35.320 "method": "nvmf_create_subsystem", 00:09:35.320 "req_id": 1 00:09:35.320 } 00:09:35.320 Got JSON-RPC error response 00:09:35.320 response: 00:09:35.320 { 00:09:35.320 "code": -32602, 00:09:35.320 "message": "Invalid MN SPDK_Controller\u001f" 00:09:35.320 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:35.320 12:05:36 -- target/invalid.sh@54 -- # gen_random_s 21 00:09:35.320 12:05:36 -- target/invalid.sh@19 -- # local length=21 ll 00:09:35.320 12:05:36 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:35.320 12:05:36 -- target/invalid.sh@21 -- # local chars 00:09:35.320 12:05:36 -- target/invalid.sh@22 -- # local string 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # printf %x 116 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # string+=t 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # printf %x 90 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # string+=Z 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # printf %x 40 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # string+='(' 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # printf %x 60 00:09:35.320 12:05:36 -- 
target/invalid.sh@25 -- # echo -e '\x3c' 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # string+='<' 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # printf %x 39 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # string+=\' 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # printf %x 104 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # string+=h 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # printf %x 78 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # string+=N 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # printf %x 65 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # string+=A 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # printf %x 106 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # string+=j 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # printf %x 104 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # string+=h 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # printf %x 123 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # string+='{' 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # printf %x 69 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:35.320 12:05:36 -- target/invalid.sh@25 -- # string+=E 00:09:35.320 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # printf %x 45 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # string+=- 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # printf %x 100 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # string+=d 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # printf %x 107 00:09:35.321 12:05:36 -- 
target/invalid.sh@25 -- # echo -e '\x6b' 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # string+=k 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # printf %x 101 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # string+=e 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # printf %x 60 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # string+='<' 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # printf %x 42 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # string+='*' 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # printf %x 125 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # string+='}' 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # printf %x 63 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # string+='?' 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # printf %x 106 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:35.321 12:05:36 -- target/invalid.sh@25 -- # string+=j 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.321 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.321 12:05:36 -- target/invalid.sh@28 -- # [[ t == \- ]] 00:09:35.321 12:05:36 -- target/invalid.sh@31 -- # echo 'tZ(<'\''hNAjh{E-dke<*}?j' 00:09:35.321 12:05:36 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'tZ(<'\''hNAjh{E-dke<*}?j' nqn.2016-06.io.spdk:cnode7641 00:09:35.581 [2024-04-26 12:05:36.609992] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7641: invalid serial number 'tZ(<'hNAjh{E-dke<*}?j' 00:09:35.581 12:05:36 -- target/invalid.sh@54 -- # out='request: 00:09:35.581 { 00:09:35.581 "nqn": "nqn.2016-06.io.spdk:cnode7641", 00:09:35.581 "serial_number": "tZ(<'\''hNAjh{E-dke<*}?j", 00:09:35.581 "method": "nvmf_create_subsystem", 00:09:35.581 "req_id": 1 00:09:35.581 } 00:09:35.581 Got JSON-RPC error response 00:09:35.581 response: 00:09:35.581 { 00:09:35.581 "code": -32602, 00:09:35.581 "message": "Invalid SN tZ(<'\''hNAjh{E-dke<*}?j" 00:09:35.581 }' 00:09:35.581 12:05:36 -- target/invalid.sh@55 -- # [[ request: 00:09:35.581 { 00:09:35.581 "nqn": "nqn.2016-06.io.spdk:cnode7641", 00:09:35.581 "serial_number": "tZ(<'hNAjh{E-dke<*}?j", 00:09:35.581 "method": "nvmf_create_subsystem", 00:09:35.581 "req_id": 1 00:09:35.581 } 00:09:35.581 Got JSON-RPC error response 00:09:35.581 response: 00:09:35.581 { 00:09:35.581 "code": -32602, 
00:09:35.581 "message": "Invalid SN tZ(<'hNAjh{E-dke<*}?j" 00:09:35.581 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:35.581 12:05:36 -- target/invalid.sh@58 -- # gen_random_s 41 00:09:35.581 12:05:36 -- target/invalid.sh@19 -- # local length=41 ll 00:09:35.581 12:05:36 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:35.581 12:05:36 -- target/invalid.sh@21 -- # local chars 00:09:35.581 12:05:36 -- target/invalid.sh@22 -- # local string 00:09:35.581 12:05:36 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:35.581 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # printf %x 80 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # string+=P 00:09:35.581 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.581 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # printf %x 99 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # string+=c 00:09:35.581 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.581 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # printf %x 105 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # string+=i 00:09:35.581 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.581 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # printf %x 92 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # string+='\' 00:09:35.581 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.581 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # printf %x 33 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # string+='!' 
00:09:35.581 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.581 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # printf %x 113 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:35.581 12:05:36 -- target/invalid.sh@25 -- # string+=q 00:09:35.581 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 57 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+=9 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 55 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+=7 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 70 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+=F 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 58 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+=: 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 59 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+=';' 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 82 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+=R 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 37 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+=% 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 92 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+='\' 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 63 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+='?' 
00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 124 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+='|' 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 114 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+=r 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 119 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+=w 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 43 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+=+ 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 119 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+=w 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # printf %x 107 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:35.582 12:05:36 -- target/invalid.sh@25 -- # string+=k 00:09:35.582 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.842 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.842 12:05:36 -- target/invalid.sh@25 -- # printf %x 122 00:09:35.842 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:35.842 12:05:36 -- target/invalid.sh@25 -- # string+=z 00:09:35.842 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.842 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.842 12:05:36 -- target/invalid.sh@25 -- # printf %x 48 00:09:35.842 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:35.842 12:05:36 -- target/invalid.sh@25 -- # string+=0 00:09:35.842 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.842 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.842 12:05:36 -- target/invalid.sh@25 -- # printf %x 83 00:09:35.842 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:35.842 12:05:36 -- target/invalid.sh@25 -- # string+=S 00:09:35.842 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.842 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.842 12:05:36 -- target/invalid.sh@25 -- # printf %x 77 00:09:35.842 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:35.842 12:05:36 -- target/invalid.sh@25 -- # string+=M 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 45 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+=- 
00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 116 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+=t 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 111 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+=o 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 54 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+=6 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 87 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+=W 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 54 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+=6 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 121 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+=y 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 95 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+=_ 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 125 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+='}' 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 83 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+=S 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 66 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+=B 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 41 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+=')' 
00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 111 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+=o 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 94 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+='^' 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 116 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+=t 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # printf %x 89 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:35.843 12:05:36 -- target/invalid.sh@25 -- # string+=Y 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.843 12:05:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.843 12:05:36 -- target/invalid.sh@28 -- # [[ P == \- ]] 00:09:35.843 12:05:36 -- target/invalid.sh@31 -- # echo 'Pci\!q97F:;R%\?|rw+wkz0SM-to6W6y_}SB)o^tY' 00:09:35.843 12:05:36 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Pci\!q97F:;R%\?|rw+wkz0SM-to6W6y_}SB)o^tY' nqn.2016-06.io.spdk:cnode3844 00:09:36.103 [2024-04-26 12:05:37.091549] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3844: invalid model number 'Pci\!q97F:;R%\?|rw+wkz0SM-to6W6y_}SB)o^tY' 00:09:36.103 12:05:37 -- target/invalid.sh@58 -- # out='request: 00:09:36.103 { 00:09:36.103 "nqn": "nqn.2016-06.io.spdk:cnode3844", 00:09:36.103 "model_number": "Pci\\!q97F:;R%\\?|rw+wkz0SM-to6W6y_}SB)o^tY", 00:09:36.103 "method": "nvmf_create_subsystem", 00:09:36.103 "req_id": 1 00:09:36.103 } 00:09:36.103 Got JSON-RPC error response 00:09:36.103 response: 00:09:36.103 { 00:09:36.103 "code": -32602, 00:09:36.103 "message": "Invalid MN Pci\\!q97F:;R%\\?|rw+wkz0SM-to6W6y_}SB)o^tY" 00:09:36.103 }' 00:09:36.103 12:05:37 -- target/invalid.sh@59 -- # [[ request: 00:09:36.103 { 00:09:36.103 "nqn": "nqn.2016-06.io.spdk:cnode3844", 00:09:36.103 "model_number": "Pci\\!q97F:;R%\\?|rw+wkz0SM-to6W6y_}SB)o^tY", 00:09:36.103 "method": "nvmf_create_subsystem", 00:09:36.103 "req_id": 1 00:09:36.103 } 00:09:36.103 Got JSON-RPC error response 00:09:36.103 response: 00:09:36.103 { 00:09:36.103 "code": -32602, 00:09:36.103 "message": "Invalid MN Pci\\!q97F:;R%\\?|rw+wkz0SM-to6W6y_}SB)o^tY" 00:09:36.103 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:36.103 12:05:37 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:36.103 [2024-04-26 12:05:37.260164] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.103 12:05:37 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:36.363 12:05:37 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:36.363 
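The character-by-character trace above boils down to a simple pattern: build a 41-character string from random ASCII codes in the 32-127 range of the chars array, hand it to nvmf_create_subsystem as a model number, and assert on the "Invalid MN" error text. A minimal bash sketch of that flow follows; it mirrors the traced gen_random_s loop but glosses over details such as how shell-special characters (backslash, quote, space) are handled in the real target/invalid.sh helper, so treat it as an illustration rather than the script's exact code.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    gen_random_s() {
        local length=$1 ll string=''
        local chars=($(seq 32 127))   # same code range as the traced chars array
        for ((ll = 0; ll < length; ll++)); do
            # pick a random code, print it as hex, and append the matching character
            string+=$(echo -e "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

    mn=$(gen_random_s 41)
    out=$($rpc nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode3844 2>&1) || true
    [[ $out == *"Invalid MN"* ]]      # the test only checks the error text, as in the trace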
12:05:37 -- target/invalid.sh@67 -- # echo '' 00:09:36.363 12:05:37 -- target/invalid.sh@67 -- # head -n 1 00:09:36.363 12:05:37 -- target/invalid.sh@67 -- # IP= 00:09:36.363 12:05:37 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:36.623 [2024-04-26 12:05:37.597215] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:36.623 12:05:37 -- target/invalid.sh@69 -- # out='request: 00:09:36.623 { 00:09:36.623 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:36.623 "listen_address": { 00:09:36.623 "trtype": "tcp", 00:09:36.623 "traddr": "", 00:09:36.623 "trsvcid": "4421" 00:09:36.623 }, 00:09:36.623 "method": "nvmf_subsystem_remove_listener", 00:09:36.623 "req_id": 1 00:09:36.623 } 00:09:36.623 Got JSON-RPC error response 00:09:36.623 response: 00:09:36.623 { 00:09:36.623 "code": -32602, 00:09:36.623 "message": "Invalid parameters" 00:09:36.623 }' 00:09:36.623 12:05:37 -- target/invalid.sh@70 -- # [[ request: 00:09:36.623 { 00:09:36.623 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:36.623 "listen_address": { 00:09:36.623 "trtype": "tcp", 00:09:36.623 "traddr": "", 00:09:36.623 "trsvcid": "4421" 00:09:36.623 }, 00:09:36.623 "method": "nvmf_subsystem_remove_listener", 00:09:36.623 "req_id": 1 00:09:36.623 } 00:09:36.623 Got JSON-RPC error response 00:09:36.623 response: 00:09:36.623 { 00:09:36.623 "code": -32602, 00:09:36.623 "message": "Invalid parameters" 00:09:36.623 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:36.623 12:05:37 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2889 -i 0 00:09:36.623 [2024-04-26 12:05:37.773752] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2889: invalid cntlid range [0-65519] 00:09:36.623 12:05:37 -- target/invalid.sh@73 -- # out='request: 00:09:36.623 { 00:09:36.623 "nqn": "nqn.2016-06.io.spdk:cnode2889", 00:09:36.623 "min_cntlid": 0, 00:09:36.623 "method": "nvmf_create_subsystem", 00:09:36.623 "req_id": 1 00:09:36.623 } 00:09:36.623 Got JSON-RPC error response 00:09:36.623 response: 00:09:36.623 { 00:09:36.623 "code": -32602, 00:09:36.623 "message": "Invalid cntlid range [0-65519]" 00:09:36.623 }' 00:09:36.623 12:05:37 -- target/invalid.sh@74 -- # [[ request: 00:09:36.623 { 00:09:36.623 "nqn": "nqn.2016-06.io.spdk:cnode2889", 00:09:36.623 "min_cntlid": 0, 00:09:36.623 "method": "nvmf_create_subsystem", 00:09:36.623 "req_id": 1 00:09:36.623 } 00:09:36.623 Got JSON-RPC error response 00:09:36.623 response: 00:09:36.623 { 00:09:36.623 "code": -32602, 00:09:36.623 "message": "Invalid cntlid range [0-65519]" 00:09:36.623 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:36.623 12:05:37 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30450 -i 65520 00:09:36.885 [2024-04-26 12:05:37.946325] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30450: invalid cntlid range [65520-65519] 00:09:36.885 12:05:37 -- target/invalid.sh@75 -- # out='request: 00:09:36.885 { 00:09:36.885 "nqn": "nqn.2016-06.io.spdk:cnode30450", 00:09:36.885 "min_cntlid": 65520, 00:09:36.885 "method": "nvmf_create_subsystem", 00:09:36.885 "req_id": 1 00:09:36.885 } 00:09:36.885 Got JSON-RPC error response 00:09:36.885 response: 00:09:36.885 { 00:09:36.885 "code": 
-32602, 00:09:36.885 "message": "Invalid cntlid range [65520-65519]" 00:09:36.885 }' 00:09:36.885 12:05:37 -- target/invalid.sh@76 -- # [[ request: 00:09:36.885 { 00:09:36.885 "nqn": "nqn.2016-06.io.spdk:cnode30450", 00:09:36.885 "min_cntlid": 65520, 00:09:36.885 "method": "nvmf_create_subsystem", 00:09:36.885 "req_id": 1 00:09:36.885 } 00:09:36.885 Got JSON-RPC error response 00:09:36.885 response: 00:09:36.885 { 00:09:36.885 "code": -32602, 00:09:36.885 "message": "Invalid cntlid range [65520-65519]" 00:09:36.885 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:36.885 12:05:37 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23975 -I 0 00:09:37.146 [2024-04-26 12:05:38.118869] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23975: invalid cntlid range [1-0] 00:09:37.146 12:05:38 -- target/invalid.sh@77 -- # out='request: 00:09:37.146 { 00:09:37.146 "nqn": "nqn.2016-06.io.spdk:cnode23975", 00:09:37.146 "max_cntlid": 0, 00:09:37.146 "method": "nvmf_create_subsystem", 00:09:37.146 "req_id": 1 00:09:37.146 } 00:09:37.146 Got JSON-RPC error response 00:09:37.146 response: 00:09:37.146 { 00:09:37.146 "code": -32602, 00:09:37.146 "message": "Invalid cntlid range [1-0]" 00:09:37.146 }' 00:09:37.146 12:05:38 -- target/invalid.sh@78 -- # [[ request: 00:09:37.146 { 00:09:37.146 "nqn": "nqn.2016-06.io.spdk:cnode23975", 00:09:37.146 "max_cntlid": 0, 00:09:37.146 "method": "nvmf_create_subsystem", 00:09:37.146 "req_id": 1 00:09:37.146 } 00:09:37.146 Got JSON-RPC error response 00:09:37.146 response: 00:09:37.146 { 00:09:37.146 "code": -32602, 00:09:37.146 "message": "Invalid cntlid range [1-0]" 00:09:37.146 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.146 12:05:38 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5145 -I 65520 00:09:37.146 [2024-04-26 12:05:38.291464] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5145: invalid cntlid range [1-65520] 00:09:37.146 12:05:38 -- target/invalid.sh@79 -- # out='request: 00:09:37.146 { 00:09:37.146 "nqn": "nqn.2016-06.io.spdk:cnode5145", 00:09:37.146 "max_cntlid": 65520, 00:09:37.146 "method": "nvmf_create_subsystem", 00:09:37.146 "req_id": 1 00:09:37.146 } 00:09:37.146 Got JSON-RPC error response 00:09:37.146 response: 00:09:37.146 { 00:09:37.146 "code": -32602, 00:09:37.146 "message": "Invalid cntlid range [1-65520]" 00:09:37.146 }' 00:09:37.146 12:05:38 -- target/invalid.sh@80 -- # [[ request: 00:09:37.146 { 00:09:37.146 "nqn": "nqn.2016-06.io.spdk:cnode5145", 00:09:37.146 "max_cntlid": 65520, 00:09:37.146 "method": "nvmf_create_subsystem", 00:09:37.146 "req_id": 1 00:09:37.146 } 00:09:37.146 Got JSON-RPC error response 00:09:37.146 response: 00:09:37.146 { 00:09:37.146 "code": -32602, 00:09:37.146 "message": "Invalid cntlid range [1-65520]" 00:09:37.146 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.146 12:05:38 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7330 -i 6 -I 5 00:09:37.407 [2024-04-26 12:05:38.463966] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7330: invalid cntlid range [6-5] 00:09:37.407 12:05:38 -- target/invalid.sh@83 -- # out='request: 00:09:37.407 { 00:09:37.407 "nqn": "nqn.2016-06.io.spdk:cnode7330", 
00:09:37.407 "min_cntlid": 6, 00:09:37.407 "max_cntlid": 5, 00:09:37.407 "method": "nvmf_create_subsystem", 00:09:37.407 "req_id": 1 00:09:37.407 } 00:09:37.407 Got JSON-RPC error response 00:09:37.407 response: 00:09:37.407 { 00:09:37.407 "code": -32602, 00:09:37.407 "message": "Invalid cntlid range [6-5]" 00:09:37.407 }' 00:09:37.407 12:05:38 -- target/invalid.sh@84 -- # [[ request: 00:09:37.407 { 00:09:37.407 "nqn": "nqn.2016-06.io.spdk:cnode7330", 00:09:37.407 "min_cntlid": 6, 00:09:37.407 "max_cntlid": 5, 00:09:37.407 "method": "nvmf_create_subsystem", 00:09:37.407 "req_id": 1 00:09:37.407 } 00:09:37.407 Got JSON-RPC error response 00:09:37.407 response: 00:09:37.407 { 00:09:37.407 "code": -32602, 00:09:37.407 "message": "Invalid cntlid range [6-5]" 00:09:37.407 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.407 12:05:38 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:37.407 12:05:38 -- target/invalid.sh@87 -- # out='request: 00:09:37.407 { 00:09:37.407 "name": "foobar", 00:09:37.407 "method": "nvmf_delete_target", 00:09:37.407 "req_id": 1 00:09:37.407 } 00:09:37.407 Got JSON-RPC error response 00:09:37.407 response: 00:09:37.407 { 00:09:37.407 "code": -32602, 00:09:37.407 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:37.407 }' 00:09:37.407 12:05:38 -- target/invalid.sh@88 -- # [[ request: 00:09:37.407 { 00:09:37.407 "name": "foobar", 00:09:37.407 "method": "nvmf_delete_target", 00:09:37.407 "req_id": 1 00:09:37.407 } 00:09:37.407 Got JSON-RPC error response 00:09:37.407 response: 00:09:37.407 { 00:09:37.407 "code": -32602, 00:09:37.407 "message": "The specified target doesn't exist, cannot delete it." 
00:09:37.407 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:37.407 12:05:38 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:37.407 12:05:38 -- target/invalid.sh@91 -- # nvmftestfini 00:09:37.407 12:05:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:37.407 12:05:38 -- nvmf/common.sh@117 -- # sync 00:09:37.407 12:05:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.407 12:05:38 -- nvmf/common.sh@120 -- # set +e 00:09:37.407 12:05:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.407 12:05:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.407 rmmod nvme_tcp 00:09:37.667 rmmod nvme_fabrics 00:09:37.667 rmmod nvme_keyring 00:09:37.667 12:05:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.667 12:05:38 -- nvmf/common.sh@124 -- # set -e 00:09:37.667 12:05:38 -- nvmf/common.sh@125 -- # return 0 00:09:37.667 12:05:38 -- nvmf/common.sh@478 -- # '[' -n 3271984 ']' 00:09:37.667 12:05:38 -- nvmf/common.sh@479 -- # killprocess 3271984 00:09:37.667 12:05:38 -- common/autotest_common.sh@936 -- # '[' -z 3271984 ']' 00:09:37.667 12:05:38 -- common/autotest_common.sh@940 -- # kill -0 3271984 00:09:37.667 12:05:38 -- common/autotest_common.sh@941 -- # uname 00:09:37.667 12:05:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:37.667 12:05:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3271984 00:09:37.667 12:05:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:37.667 12:05:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:37.667 12:05:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3271984' 00:09:37.667 killing process with pid 3271984 00:09:37.667 12:05:38 -- common/autotest_common.sh@955 -- # kill 3271984 00:09:37.667 12:05:38 -- common/autotest_common.sh@960 -- # wait 3271984 00:09:37.667 12:05:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:37.667 12:05:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:37.667 12:05:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:37.667 12:05:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:37.667 12:05:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.667 12:05:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.667 12:05:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.667 12:05:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.208 12:05:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:40.208 00:09:40.208 real 0m13.387s 00:09:40.208 user 0m19.133s 00:09:40.208 sys 0m6.265s 00:09:40.208 12:05:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:40.208 12:05:40 -- common/autotest_common.sh@10 -- # set +x 00:09:40.208 ************************************ 00:09:40.208 END TEST nvmf_invalid 00:09:40.208 ************************************ 00:09:40.208 12:05:40 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:40.208 12:05:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:40.208 12:05:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:40.208 12:05:40 -- common/autotest_common.sh@10 -- # set +x 00:09:40.208 ************************************ 00:09:40.208 START TEST nvmf_abort 00:09:40.208 ************************************ 00:09:40.208 12:05:41 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:40.208 * Looking for test storage... 00:09:40.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.208 12:05:41 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.208 12:05:41 -- nvmf/common.sh@7 -- # uname -s 00:09:40.208 12:05:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.208 12:05:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.208 12:05:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.208 12:05:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.208 12:05:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.208 12:05:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.208 12:05:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.208 12:05:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.208 12:05:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.208 12:05:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.208 12:05:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:40.208 12:05:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:40.208 12:05:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.208 12:05:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.208 12:05:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.208 12:05:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.208 12:05:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.208 12:05:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.208 12:05:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.208 12:05:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.208 12:05:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.208 12:05:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.208 12:05:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.208 12:05:41 -- paths/export.sh@5 -- # export PATH 00:09:40.208 12:05:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.208 12:05:41 -- nvmf/common.sh@47 -- # : 0 00:09:40.208 12:05:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.208 12:05:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.208 12:05:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.208 12:05:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.208 12:05:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.208 12:05:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.208 12:05:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.208 12:05:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.208 12:05:41 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.208 12:05:41 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:40.208 12:05:41 -- target/abort.sh@14 -- # nvmftestinit 00:09:40.208 12:05:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:40.208 12:05:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.208 12:05:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:40.208 12:05:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:40.208 12:05:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:40.208 12:05:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.208 12:05:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.208 12:05:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.208 12:05:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:40.208 12:05:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:40.208 12:05:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:40.208 12:05:41 -- common/autotest_common.sh@10 -- # set +x 00:09:48.385 12:05:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:48.385 12:05:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:48.385 12:05:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:48.385 12:05:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:48.385 12:05:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:48.385 12:05:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:48.385 12:05:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:48.385 12:05:48 -- nvmf/common.sh@295 -- # net_devs=() 00:09:48.385 12:05:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:48.385 12:05:48 -- nvmf/common.sh@296 -- 
# e810=() 00:09:48.385 12:05:48 -- nvmf/common.sh@296 -- # local -ga e810 00:09:48.385 12:05:48 -- nvmf/common.sh@297 -- # x722=() 00:09:48.385 12:05:48 -- nvmf/common.sh@297 -- # local -ga x722 00:09:48.385 12:05:48 -- nvmf/common.sh@298 -- # mlx=() 00:09:48.385 12:05:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:48.385 12:05:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:48.385 12:05:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:48.385 12:05:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:48.385 12:05:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:48.385 12:05:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:48.385 12:05:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:48.385 12:05:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:48.385 12:05:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:48.385 12:05:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:48.385 12:05:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:48.385 12:05:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:48.385 12:05:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:48.385 12:05:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:48.385 12:05:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:48.385 12:05:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:48.385 12:05:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:48.385 12:05:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:48.385 12:05:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:48.385 12:05:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:48.385 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:48.385 12:05:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:48.385 12:05:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:48.385 12:05:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.385 12:05:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.385 12:05:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:48.386 12:05:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:48.386 12:05:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:48.386 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:48.386 12:05:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:48.386 12:05:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:48.386 12:05:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.386 12:05:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.386 12:05:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:48.386 12:05:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:48.386 12:05:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:48.386 12:05:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:48.386 12:05:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:48.386 12:05:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.386 12:05:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:48.386 12:05:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.386 12:05:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:48.386 Found 
net devices under 0000:31:00.0: cvl_0_0 00:09:48.386 12:05:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.386 12:05:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:48.386 12:05:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.386 12:05:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:48.386 12:05:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.386 12:05:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:48.386 Found net devices under 0000:31:00.1: cvl_0_1 00:09:48.386 12:05:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.386 12:05:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:48.386 12:05:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:48.386 12:05:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:48.386 12:05:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:48.386 12:05:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:48.386 12:05:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:48.386 12:05:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:48.386 12:05:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:48.386 12:05:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:48.386 12:05:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:48.386 12:05:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:48.386 12:05:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:48.386 12:05:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:48.386 12:05:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:48.386 12:05:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:48.386 12:05:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:48.386 12:05:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:48.386 12:05:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:48.386 12:05:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:48.386 12:05:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:48.386 12:05:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:48.386 12:05:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:48.386 12:05:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:48.386 12:05:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:48.386 12:05:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:48.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:48.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:09:48.386 00:09:48.386 --- 10.0.0.2 ping statistics --- 00:09:48.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.386 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:09:48.386 12:05:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:48.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:48.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:09:48.386 00:09:48.386 --- 10.0.0.1 ping statistics --- 00:09:48.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.386 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:09:48.386 12:05:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.386 12:05:48 -- nvmf/common.sh@411 -- # return 0 00:09:48.386 12:05:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:48.386 12:05:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.386 12:05:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:48.386 12:05:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:48.386 12:05:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.386 12:05:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:48.386 12:05:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:48.386 12:05:48 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:48.386 12:05:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:48.386 12:05:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:48.386 12:05:48 -- common/autotest_common.sh@10 -- # set +x 00:09:48.386 12:05:48 -- nvmf/common.sh@470 -- # nvmfpid=3277223 00:09:48.386 12:05:48 -- nvmf/common.sh@471 -- # waitforlisten 3277223 00:09:48.386 12:05:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:48.386 12:05:48 -- common/autotest_common.sh@817 -- # '[' -z 3277223 ']' 00:09:48.386 12:05:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.386 12:05:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:48.386 12:05:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.386 12:05:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:48.386 12:05:48 -- common/autotest_common.sh@10 -- # set +x 00:09:48.386 [2024-04-26 12:05:48.570738] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:09:48.386 [2024-04-26 12:05:48.570785] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.386 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.386 [2024-04-26 12:05:48.656457] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:48.386 [2024-04-26 12:05:48.737800] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.386 [2024-04-26 12:05:48.737874] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.386 [2024-04-26 12:05:48.737882] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.386 [2024-04-26 12:05:48.737890] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.386 [2024-04-26 12:05:48.737896] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
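One detail worth spelling out from the startup lines above: the target is launched inside the cvl_0_0_ns_spdk namespace as nvmf_tgt -i 0 -e 0xFFFF -m 0xE. Reading the core mask as a worked expansion (not part of the log itself):

    0xE = 0b1110  ->  bit 0 clear, bits 1-3 set  ->  the target runs on CPU cores 1, 2 and 3

which is consistent with DPDK reporting "Total cores available: 3" and with the three reactor start notices that follow.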
00:09:48.386 [2024-04-26 12:05:48.738041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.386 [2024-04-26 12:05:48.738347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.386 [2024-04-26 12:05:48.738348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.386 12:05:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:48.386 12:05:49 -- common/autotest_common.sh@850 -- # return 0 00:09:48.386 12:05:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:48.386 12:05:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:48.386 12:05:49 -- common/autotest_common.sh@10 -- # set +x 00:09:48.386 12:05:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.386 12:05:49 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:48.386 12:05:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.386 12:05:49 -- common/autotest_common.sh@10 -- # set +x 00:09:48.386 [2024-04-26 12:05:49.392324] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.386 12:05:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.386 12:05:49 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:48.386 12:05:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.386 12:05:49 -- common/autotest_common.sh@10 -- # set +x 00:09:48.386 Malloc0 00:09:48.386 12:05:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.386 12:05:49 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:48.386 12:05:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.386 12:05:49 -- common/autotest_common.sh@10 -- # set +x 00:09:48.386 Delay0 00:09:48.386 12:05:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.386 12:05:49 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:48.386 12:05:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.386 12:05:49 -- common/autotest_common.sh@10 -- # set +x 00:09:48.386 12:05:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.386 12:05:49 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:48.386 12:05:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.386 12:05:49 -- common/autotest_common.sh@10 -- # set +x 00:09:48.386 12:05:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.386 12:05:49 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:48.386 12:05:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.386 12:05:49 -- common/autotest_common.sh@10 -- # set +x 00:09:48.386 [2024-04-26 12:05:49.472678] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.386 12:05:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.386 12:05:49 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:48.387 12:05:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.387 12:05:49 -- common/autotest_common.sh@10 -- # set +x 00:09:48.387 12:05:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.387 12:05:49 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:48.387 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.668 [2024-04-26 12:05:49.623010] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:50.578 Initializing NVMe Controllers 00:09:50.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:50.578 controller IO queue size 128 less than required 00:09:50.578 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:50.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:50.578 Initialization complete. Launching workers. 00:09:50.578 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 34612 00:09:50.578 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34677, failed to submit 62 00:09:50.578 success 34616, unsuccess 61, failed 0 00:09:50.578 12:05:51 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:50.578 12:05:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.578 12:05:51 -- common/autotest_common.sh@10 -- # set +x 00:09:50.839 12:05:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.839 12:05:51 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:50.839 12:05:51 -- target/abort.sh@38 -- # nvmftestfini 00:09:50.839 12:05:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:50.839 12:05:51 -- nvmf/common.sh@117 -- # sync 00:09:50.839 12:05:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:50.839 12:05:51 -- nvmf/common.sh@120 -- # set +e 00:09:50.839 12:05:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:50.839 12:05:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:50.839 rmmod nvme_tcp 00:09:50.839 rmmod nvme_fabrics 00:09:50.839 rmmod nvme_keyring 00:09:50.839 12:05:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:50.839 12:05:51 -- nvmf/common.sh@124 -- # set -e 00:09:50.839 12:05:51 -- nvmf/common.sh@125 -- # return 0 00:09:50.839 12:05:51 -- nvmf/common.sh@478 -- # '[' -n 3277223 ']' 00:09:50.839 12:05:51 -- nvmf/common.sh@479 -- # killprocess 3277223 00:09:50.839 12:05:51 -- common/autotest_common.sh@936 -- # '[' -z 3277223 ']' 00:09:50.839 12:05:51 -- common/autotest_common.sh@940 -- # kill -0 3277223 00:09:50.839 12:05:51 -- common/autotest_common.sh@941 -- # uname 00:09:50.839 12:05:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:50.839 12:05:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3277223 00:09:50.839 12:05:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:50.839 12:05:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:50.839 12:05:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3277223' 00:09:50.839 killing process with pid 3277223 00:09:50.839 12:05:51 -- common/autotest_common.sh@955 -- # kill 3277223 00:09:50.839 12:05:51 -- common/autotest_common.sh@960 -- # wait 3277223 00:09:50.839 12:05:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:50.839 12:05:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:50.839 12:05:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:50.839 12:05:52 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.839 12:05:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:50.839 
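Stripped of the xtrace noise, the nvmf_abort run that just completed is a short sequence of RPC calls followed by the abort example binary. The sketch below re-lists them in order, using the bdev names, NQN, address and command line that appear in the trace; it is a condensed reading of abort.sh, not its literal contents, and it omits the rpc_cmd/waitforlisten plumbing the real test uses.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0                 # 64 MB malloc bdev, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # the delayed namespace keeps I/O outstanding long enough for aborts to land
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0      # teardown before nvmftestfini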
12:05:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.839 12:05:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.839 12:05:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.385 12:05:54 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:53.385 00:09:53.385 real 0m13.012s 00:09:53.385 user 0m13.973s 00:09:53.385 sys 0m6.186s 00:09:53.385 12:05:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:53.385 12:05:54 -- common/autotest_common.sh@10 -- # set +x 00:09:53.385 ************************************ 00:09:53.385 END TEST nvmf_abort 00:09:53.385 ************************************ 00:09:53.385 12:05:54 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:53.385 12:05:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:53.385 12:05:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.385 12:05:54 -- common/autotest_common.sh@10 -- # set +x 00:09:53.385 ************************************ 00:09:53.385 START TEST nvmf_ns_hotplug_stress 00:09:53.385 ************************************ 00:09:53.385 12:05:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:53.385 * Looking for test storage... 00:09:53.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.385 12:05:54 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.385 12:05:54 -- nvmf/common.sh@7 -- # uname -s 00:09:53.385 12:05:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.385 12:05:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.385 12:05:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.385 12:05:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.385 12:05:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.385 12:05:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.385 12:05:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.385 12:05:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.385 12:05:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.385 12:05:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.385 12:05:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:53.385 12:05:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:53.385 12:05:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.385 12:05:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.385 12:05:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.385 12:05:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.385 12:05:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.385 12:05:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.385 12:05:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.385 12:05:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.385 12:05:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.385 12:05:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.385 12:05:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.385 12:05:54 -- paths/export.sh@5 -- # export PATH 00:09:53.386 12:05:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.386 12:05:54 -- nvmf/common.sh@47 -- # : 0 00:09:53.386 12:05:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.386 12:05:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.386 12:05:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.386 12:05:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.386 12:05:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.386 12:05:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.386 12:05:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.386 12:05:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.386 12:05:54 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:53.386 12:05:54 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:09:53.386 12:05:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:53.386 12:05:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.386 12:05:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:53.386 12:05:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:53.386 12:05:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:53.386 12:05:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:53.386 12:05:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.386 12:05:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.386 12:05:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:53.386 12:05:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:53.386 12:05:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:53.386 12:05:54 -- common/autotest_common.sh@10 -- # set +x 00:10:01.528 12:06:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:01.528 12:06:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:01.528 12:06:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:01.528 12:06:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:01.528 12:06:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:01.528 12:06:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:01.528 12:06:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:01.528 12:06:01 -- nvmf/common.sh@295 -- # net_devs=() 00:10:01.528 12:06:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:01.528 12:06:01 -- nvmf/common.sh@296 -- # e810=() 00:10:01.528 12:06:01 -- nvmf/common.sh@296 -- # local -ga e810 00:10:01.528 12:06:01 -- nvmf/common.sh@297 -- # x722=() 00:10:01.528 12:06:01 -- nvmf/common.sh@297 -- # local -ga x722 00:10:01.528 12:06:01 -- nvmf/common.sh@298 -- # mlx=() 00:10:01.528 12:06:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:01.528 12:06:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.528 12:06:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.528 12:06:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.528 12:06:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.528 12:06:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.528 12:06:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.528 12:06:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.528 12:06:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.528 12:06:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.528 12:06:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.528 12:06:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.528 12:06:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:01.528 12:06:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:01.528 12:06:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:01.528 12:06:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:01.528 12:06:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:01.528 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:01.528 12:06:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:01.528 12:06:01 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:01.528 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:01.528 12:06:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:01.528 12:06:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:01.528 12:06:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.528 12:06:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:01.528 12:06:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.528 12:06:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:01.528 Found net devices under 0000:31:00.0: cvl_0_0 00:10:01.528 12:06:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.528 12:06:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:01.528 12:06:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.528 12:06:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:01.528 12:06:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.528 12:06:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:01.528 Found net devices under 0000:31:00.1: cvl_0_1 00:10:01.528 12:06:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.528 12:06:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:01.528 12:06:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:01.528 12:06:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:01.528 12:06:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.528 12:06:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.528 12:06:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.528 12:06:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:01.528 12:06:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.528 12:06:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.528 12:06:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:01.528 12:06:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.528 12:06:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.528 12:06:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:01.528 12:06:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:01.528 12:06:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.528 12:06:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.528 12:06:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.528 12:06:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.528 12:06:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:01.528 12:06:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:10:01.528 12:06:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.528 12:06:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.528 12:06:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:01.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:01.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:10:01.528 00:10:01.528 --- 10.0.0.2 ping statistics --- 00:10:01.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.528 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:10:01.528 12:06:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:01.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:10:01.528 00:10:01.528 --- 10.0.0.1 ping statistics --- 00:10:01.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.528 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:10:01.528 12:06:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.528 12:06:01 -- nvmf/common.sh@411 -- # return 0 00:10:01.528 12:06:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:01.528 12:06:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.528 12:06:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:01.528 12:06:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.528 12:06:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:01.528 12:06:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:01.528 12:06:01 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:10:01.528 12:06:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:01.528 12:06:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:01.528 12:06:01 -- common/autotest_common.sh@10 -- # set +x 00:10:01.528 12:06:01 -- nvmf/common.sh@470 -- # nvmfpid=3282407 00:10:01.528 12:06:01 -- nvmf/common.sh@471 -- # waitforlisten 3282407 00:10:01.528 12:06:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:01.528 12:06:01 -- common/autotest_common.sh@817 -- # '[' -z 3282407 ']' 00:10:01.528 12:06:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.528 12:06:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:01.528 12:06:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.528 12:06:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:01.528 12:06:01 -- common/autotest_common.sh@10 -- # set +x 00:10:01.528 [2024-04-26 12:06:01.839807] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
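[note] For reference, a minimal sketch of the test network that nvmf_tcp_init has just set up, assuming the interface names and addresses reported in the log above (ports cvl_0_0 / cvl_0_1 on 0000:31:00.0/1, target 10.0.0.2 in a namespace, initiator 10.0.0.1 in the root namespace); this is a paraphrase of the logged commands, not the verbatim common.sh code:
    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one physical port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP to the default port
    ping -c 1 10.0.0.2                                                  # root ns -> target ns reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns reachability check
The nvmf_tgt application is then launched inside cvl_0_0_ns_spdk (via NVMF_TARGET_NS_CMD), which is why the listener below binds to 10.0.0.2:4420.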
00:10:01.529 [2024-04-26 12:06:01.839900] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.529 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.529 [2024-04-26 12:06:01.929825] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:01.529 [2024-04-26 12:06:02.021659] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.529 [2024-04-26 12:06:02.021722] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.529 [2024-04-26 12:06:02.021730] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.529 [2024-04-26 12:06:02.021737] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.529 [2024-04-26 12:06:02.021743] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.529 [2024-04-26 12:06:02.021908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.529 [2024-04-26 12:06:02.022125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.529 [2024-04-26 12:06:02.022224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.529 12:06:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:01.529 12:06:02 -- common/autotest_common.sh@850 -- # return 0 00:10:01.529 12:06:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:01.529 12:06:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:01.529 12:06:02 -- common/autotest_common.sh@10 -- # set +x 00:10:01.529 12:06:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.529 12:06:02 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:10:01.529 12:06:02 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:01.789 [2024-04-26 12:06:02.796591] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.789 12:06:02 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:01.790 12:06:02 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.051 [2024-04-26 12:06:03.138050] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.051 12:06:03 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:02.312 12:06:03 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:02.312 Malloc0 00:10:02.312 12:06:03 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:02.574 Delay0 00:10:02.574 12:06:03 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.836 12:06:03 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:02.836 NULL1 00:10:02.836 12:06:04 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:03.097 12:06:04 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=3282795 00:10:03.097 12:06:04 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:03.097 12:06:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:03.097 12:06:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.097 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.357 12:06:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.357 12:06:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:10:03.357 12:06:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:03.618 [2024-04-26 12:06:04.643344] bdev.c:4971:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:10:03.618 true 00:10:03.618 12:06:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:03.618 12:06:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.878 12:06:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.878 12:06:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:10:03.878 12:06:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:04.139 true 00:10:04.139 12:06:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:04.139 12:06:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.399 12:06:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.400 12:06:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:10:04.400 12:06:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:04.661 true 00:10:04.661 12:06:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:04.661 12:06:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.661 12:06:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.921 12:06:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:10:04.921 12:06:06 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:05.182 true 00:10:05.182 12:06:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:05.182 12:06:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.182 12:06:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.443 12:06:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:10:05.443 12:06:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:05.703 true 00:10:05.703 12:06:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:05.703 12:06:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.703 12:06:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.963 12:06:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:10:05.963 12:06:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:06.224 true 00:10:06.224 12:06:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:06.224 12:06:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.224 12:06:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.484 12:06:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:10:06.484 12:06:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:06.745 true 00:10:06.745 12:06:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:06.745 12:06:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.745 12:06:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.005 12:06:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:10:07.005 12:06:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:07.005 true 00:10:07.005 12:06:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:07.005 12:06:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.266 12:06:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.527 12:06:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:10:07.527 12:06:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:07.527 true 
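[note] The repeating records from here on are iterations of the ns_hotplug_stress loop. A rough sketch of what each pass amounts to, reconstructed from the logged steps (rpc.py stands for the full scripts/rpc.py path shown above; PERF_PID and null_size are the variables reported in the log; the actual script may differ in detail):
    # spdk_nvme_perf was started earlier against 10.0.0.2:4420 (randread, qd 128, 30 s) as PERF_PID
    while kill -0 "$PERF_PID"; do                                           # keep going while perf is still running
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # hot-remove namespace 1 (Delay0)
        rpc.py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add it back
        null_size=$((null_size + 1))                                        # 1001, 1002, ... as seen in the log
        rpc.py bdev_null_resize NULL1 "$null_size"                          # resize the NULL1 bdev backing namespace 2
    done
The point of the test is that the hot add/remove and resize churn happens while live I/O is in flight on the other namespace; the loop ends when the 30-second perf run exits and kill -0 starts failing.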
00:10:07.527 12:06:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:07.527 12:06:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.787 12:06:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.048 12:06:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:10:08.048 12:06:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:08.048 true 00:10:08.048 12:06:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:08.048 12:06:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.308 12:06:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.568 12:06:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:10:08.569 12:06:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:08.569 true 00:10:08.569 12:06:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:08.569 12:06:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.829 12:06:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.829 12:06:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:10:08.829 12:06:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:09.090 true 00:10:09.090 12:06:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:09.090 12:06:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.350 12:06:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.350 12:06:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:10:09.350 12:06:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:09.610 true 00:10:09.610 12:06:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:09.610 12:06:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.871 12:06:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.871 12:06:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:10:09.871 12:06:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:10.132 true 00:10:10.132 12:06:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:10.132 12:06:11 -- 
target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.394 12:06:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.394 12:06:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:10:10.394 12:06:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:10.655 true 00:10:10.655 12:06:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:10.655 12:06:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.916 12:06:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.916 12:06:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:10:10.916 12:06:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:11.177 true 00:10:11.177 12:06:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:11.177 12:06:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.439 12:06:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.439 12:06:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:10:11.439 12:06:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:11.699 true 00:10:11.699 12:06:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:11.699 12:06:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.699 12:06:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.961 12:06:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:10:11.961 12:06:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:12.222 true 00:10:12.222 12:06:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:12.223 12:06:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.223 12:06:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.483 12:06:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:10:12.483 12:06:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:12.743 true 00:10:12.743 12:06:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:12.743 12:06:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.743 12:06:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.003 12:06:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:10:13.003 12:06:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:13.264 true 00:10:13.264 12:06:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:13.264 12:06:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.264 12:06:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.525 12:06:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:10:13.525 12:06:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:13.525 true 00:10:13.785 12:06:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:13.785 12:06:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.785 12:06:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.047 12:06:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:10:14.047 12:06:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:14.047 true 00:10:14.307 12:06:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:14.307 12:06:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.307 12:06:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.567 12:06:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:10:14.567 12:06:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:14.567 true 00:10:14.827 12:06:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:14.827 12:06:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.827 12:06:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.087 12:06:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:10:15.087 12:06:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:15.087 true 00:10:15.347 12:06:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:15.347 12:06:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.347 12:06:16 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.607 12:06:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:10:15.607 12:06:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:15.607 true 00:10:15.607 12:06:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:15.607 12:06:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.867 12:06:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.127 12:06:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:10:16.127 12:06:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:16.127 true 00:10:16.127 12:06:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:16.127 12:06:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.387 12:06:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.648 12:06:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:10:16.648 12:06:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:16.648 true 00:10:16.648 12:06:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:16.648 12:06:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.909 12:06:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.169 12:06:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:10:17.169 12:06:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:17.169 true 00:10:17.169 12:06:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:17.169 12:06:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.430 12:06:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.690 12:06:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:10:17.690 12:06:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:17.690 true 00:10:17.690 12:06:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:17.690 12:06:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.951 12:06:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:10:18.261 12:06:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:10:18.261 12:06:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:18.261 true 00:10:18.261 12:06:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:18.261 12:06:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.541 12:06:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.541 12:06:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:10:18.541 12:06:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:18.803 true 00:10:18.803 12:06:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:18.803 12:06:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.063 12:06:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.063 12:06:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:10:19.063 12:06:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:19.323 true 00:10:19.323 12:06:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:19.323 12:06:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.583 12:06:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.584 12:06:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:10:19.584 12:06:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:19.844 true 00:10:19.845 12:06:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:19.845 12:06:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.105 12:06:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.105 12:06:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:10:20.105 12:06:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:20.367 true 00:10:20.367 12:06:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:20.367 12:06:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.367 12:06:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.628 12:06:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:10:20.628 12:06:21 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:20.890 true 00:10:20.890 12:06:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:20.890 12:06:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.890 12:06:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.149 12:06:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:10:21.149 12:06:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:21.410 true 00:10:21.410 12:06:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:21.410 12:06:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.410 12:06:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.669 12:06:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:10:21.669 12:06:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:21.929 true 00:10:21.929 12:06:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:21.929 12:06:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.929 12:06:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.188 12:06:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:10:22.188 12:06:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:22.188 true 00:10:22.449 12:06:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:22.449 12:06:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.449 12:06:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.710 12:06:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:10:22.710 12:06:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:22.710 true 00:10:22.970 12:06:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:22.971 12:06:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.971 12:06:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.232 12:06:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:10:23.232 12:06:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1040 00:10:23.232 true 00:10:23.232 12:06:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:23.232 12:06:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.493 12:06:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.754 12:06:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:10:23.754 12:06:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:23.754 true 00:10:23.754 12:06:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:23.754 12:06:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.014 12:06:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.274 12:06:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:10:24.274 12:06:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:24.274 true 00:10:24.274 12:06:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:24.274 12:06:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.534 12:06:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.795 12:06:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:10:24.795 12:06:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:24.795 true 00:10:24.795 12:06:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:24.795 12:06:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.056 12:06:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.318 12:06:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:10:25.318 12:06:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:25.318 true 00:10:25.318 12:06:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:25.318 12:06:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.580 12:06:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.842 12:06:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:10:25.842 12:06:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:25.842 true 00:10:25.842 12:06:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:25.842 
12:06:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.102 12:06:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.363 12:06:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:10:26.363 12:06:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:26.363 true 00:10:26.363 12:06:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:26.363 12:06:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.624 12:06:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.884 12:06:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:10:26.884 12:06:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:26.884 true 00:10:26.884 12:06:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:26.884 12:06:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.145 12:06:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.405 12:06:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:10:27.405 12:06:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:27.405 true 00:10:27.405 12:06:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:27.405 12:06:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.666 12:06:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.666 12:06:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:10:27.666 12:06:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:27.933 true 00:10:27.933 12:06:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:27.933 12:06:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.193 12:06:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.193 12:06:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:10:28.193 12:06:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:28.454 true 00:10:28.454 12:06:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:28.454 12:06:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.714 12:06:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.714 12:06:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051 00:10:28.714 12:06:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:28.974 true 00:10:28.974 12:06:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:28.974 12:06:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.233 12:06:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.234 12:06:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1052 00:10:29.234 12:06:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:29.493 true 00:10:29.493 12:06:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:29.493 12:06:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.752 12:06:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.752 12:06:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053 00:10:29.752 12:06:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:30.011 true 00:10:30.011 12:06:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:30.011 12:06:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.272 12:06:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.272 12:06:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1054 00:10:30.272 12:06:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:30.532 true 00:10:30.532 12:06:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:30.532 12:06:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.792 12:06:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.792 12:06:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1055 00:10:30.792 12:06:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:31.052 true 00:10:31.052 12:06:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:31.052 12:06:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.313 12:06:32 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.313 12:06:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1056 00:10:31.313 12:06:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:10:31.575 true 00:10:31.575 12:06:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:31.575 12:06:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.836 12:06:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.836 12:06:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1057 00:10:31.836 12:06:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:10:32.096 true 00:10:32.096 12:06:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:32.096 12:06:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.096 12:06:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.357 12:06:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1058 00:10:32.357 12:06:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:10:32.616 true 00:10:32.616 12:06:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:32.616 12:06:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.616 12:06:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.876 12:06:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1059 00:10:32.876 12:06:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:10:33.136 true 00:10:33.136 12:06:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:33.136 12:06:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.136 12:06:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.397 12:06:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1060 00:10:33.398 12:06:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:10:33.398 Initializing NVMe Controllers 00:10:33.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:33.398 Controller IO queue size 128, less than required. 00:10:33.398 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:33.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:33.398 Initialization complete. Launching workers. 00:10:33.398 ======================================================== 00:10:33.398 Latency(us) 00:10:33.398 Device Information : IOPS MiB/s Average min max 00:10:33.398 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30174.17 14.73 4242.07 1516.43 8985.12 00:10:33.398 ======================================================== 00:10:33.398 Total : 30174.17 14.73 4242.07 1516.43 8985.12 00:10:33.398 00:10:33.657 true 00:10:33.657 12:06:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3282795 00:10:33.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (3282795) - No such process 00:10:33.657 12:06:34 -- target/ns_hotplug_stress.sh@44 -- # wait 3282795 00:10:33.657 12:06:34 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:33.657 12:06:34 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:10:33.657 12:06:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:33.657 12:06:34 -- nvmf/common.sh@117 -- # sync 00:10:33.657 12:06:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:33.657 12:06:34 -- nvmf/common.sh@120 -- # set +e 00:10:33.657 12:06:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:33.657 12:06:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:33.657 rmmod nvme_tcp 00:10:33.657 rmmod nvme_fabrics 00:10:33.657 rmmod nvme_keyring 00:10:33.657 12:06:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:33.657 12:06:34 -- nvmf/common.sh@124 -- # set -e 00:10:33.657 12:06:34 -- nvmf/common.sh@125 -- # return 0 00:10:33.657 12:06:34 -- nvmf/common.sh@478 -- # '[' -n 3282407 ']' 00:10:33.657 12:06:34 -- nvmf/common.sh@479 -- # killprocess 3282407 00:10:33.657 12:06:34 -- common/autotest_common.sh@936 -- # '[' -z 3282407 ']' 00:10:33.657 12:06:34 -- common/autotest_common.sh@940 -- # kill -0 3282407 00:10:33.657 12:06:34 -- common/autotest_common.sh@941 -- # uname 00:10:33.657 12:06:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:33.657 12:06:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3282407 00:10:33.657 12:06:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:33.657 12:06:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:33.657 12:06:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3282407' 00:10:33.657 killing process with pid 3282407 00:10:33.657 12:06:34 -- common/autotest_common.sh@955 -- # kill 3282407 00:10:33.657 12:06:34 -- common/autotest_common.sh@960 -- # wait 3282407 00:10:33.918 12:06:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:33.918 12:06:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:33.918 12:06:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:33.918 12:06:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:33.918 12:06:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:33.918 12:06:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.918 12:06:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:33.918 12:06:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.832 12:06:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:35.832 00:10:35.832 real 0m42.675s 00:10:35.832 user 2m34.885s 00:10:35.832 sys 0m12.687s 00:10:35.832 12:06:37 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:10:35.832 12:06:37 -- common/autotest_common.sh@10 -- # set +x 00:10:35.832 ************************************ 00:10:35.832 END TEST nvmf_ns_hotplug_stress 00:10:35.832 ************************************ 00:10:35.832 12:06:37 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:36.093 12:06:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:36.093 12:06:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:36.093 12:06:37 -- common/autotest_common.sh@10 -- # set +x 00:10:36.093 ************************************ 00:10:36.093 START TEST nvmf_connect_stress 00:10:36.093 ************************************ 00:10:36.093 12:06:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:36.093 * Looking for test storage... 00:10:36.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.093 12:06:37 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.093 12:06:37 -- nvmf/common.sh@7 -- # uname -s 00:10:36.093 12:06:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.093 12:06:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.093 12:06:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.093 12:06:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.093 12:06:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.093 12:06:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.093 12:06:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.093 12:06:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.093 12:06:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.093 12:06:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.354 12:06:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:36.354 12:06:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:36.354 12:06:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.354 12:06:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.354 12:06:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.354 12:06:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.354 12:06:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.354 12:06:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.354 12:06:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.354 12:06:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.354 12:06:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.354 12:06:37 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.354 12:06:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.354 12:06:37 -- paths/export.sh@5 -- # export PATH 00:10:36.354 12:06:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.354 12:06:37 -- nvmf/common.sh@47 -- # : 0 00:10:36.354 12:06:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:36.354 12:06:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:36.354 12:06:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.354 12:06:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.354 12:06:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.354 12:06:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:36.354 12:06:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:36.354 12:06:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:36.354 12:06:37 -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:36.354 12:06:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:36.354 12:06:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.354 12:06:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:36.354 12:06:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:36.354 12:06:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:36.354 12:06:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.354 12:06:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:36.354 12:06:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.354 12:06:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:36.354 12:06:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:36.354 12:06:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:36.354 12:06:37 -- common/autotest_common.sh@10 -- # set +x 00:10:44.495 12:06:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:44.495 12:06:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:44.495 12:06:44 -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:10:44.495 12:06:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:44.495 12:06:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:44.495 12:06:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:44.495 12:06:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:44.495 12:06:44 -- nvmf/common.sh@295 -- # net_devs=() 00:10:44.495 12:06:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:44.495 12:06:44 -- nvmf/common.sh@296 -- # e810=() 00:10:44.496 12:06:44 -- nvmf/common.sh@296 -- # local -ga e810 00:10:44.496 12:06:44 -- nvmf/common.sh@297 -- # x722=() 00:10:44.496 12:06:44 -- nvmf/common.sh@297 -- # local -ga x722 00:10:44.496 12:06:44 -- nvmf/common.sh@298 -- # mlx=() 00:10:44.496 12:06:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:44.496 12:06:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.496 12:06:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.496 12:06:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.496 12:06:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.496 12:06:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.496 12:06:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.496 12:06:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.496 12:06:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.496 12:06:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.496 12:06:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.496 12:06:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.496 12:06:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:44.496 12:06:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:44.496 12:06:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:44.496 12:06:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:44.496 12:06:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:44.496 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:44.496 12:06:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:44.496 12:06:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:44.496 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:44.496 12:06:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:44.496 12:06:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@372 -- # [[ 
tcp == rdma ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:44.496 12:06:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.496 12:06:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:44.496 12:06:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.496 12:06:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:44.496 Found net devices under 0000:31:00.0: cvl_0_0 00:10:44.496 12:06:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.496 12:06:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:44.496 12:06:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.496 12:06:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:44.496 12:06:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.496 12:06:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:44.496 Found net devices under 0000:31:00.1: cvl_0_1 00:10:44.496 12:06:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.496 12:06:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:44.496 12:06:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:44.496 12:06:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:44.496 12:06:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.496 12:06:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.496 12:06:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:44.496 12:06:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:44.496 12:06:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:44.496 12:06:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:44.496 12:06:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:44.496 12:06:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:44.496 12:06:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.496 12:06:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:44.496 12:06:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:44.496 12:06:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:44.496 12:06:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:44.496 12:06:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:44.496 12:06:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:44.496 12:06:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:44.496 12:06:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:44.496 12:06:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:44.496 12:06:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:44.496 12:06:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:44.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:44.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:10:44.496 00:10:44.496 --- 10.0.0.2 ping statistics --- 00:10:44.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.496 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:10:44.496 12:06:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:44.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:10:44.496 00:10:44.496 --- 10.0.0.1 ping statistics --- 00:10:44.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.496 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:10:44.496 12:06:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.496 12:06:44 -- nvmf/common.sh@411 -- # return 0 00:10:44.496 12:06:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:44.496 12:06:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.496 12:06:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:44.496 12:06:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.496 12:06:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:44.496 12:06:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:44.496 12:06:44 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:44.496 12:06:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:44.496 12:06:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:44.496 12:06:44 -- common/autotest_common.sh@10 -- # set +x 00:10:44.496 12:06:44 -- nvmf/common.sh@470 -- # nvmfpid=3293837 00:10:44.496 12:06:44 -- nvmf/common.sh@471 -- # waitforlisten 3293837 00:10:44.496 12:06:44 -- common/autotest_common.sh@817 -- # '[' -z 3293837 ']' 00:10:44.496 12:06:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.496 12:06:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:44.496 12:06:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:44.496 12:06:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.496 12:06:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:44.496 12:06:44 -- common/autotest_common.sh@10 -- # set +x 00:10:44.496 [2024-04-26 12:06:44.713489] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:10:44.496 [2024-04-26 12:06:44.713555] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.496 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.496 [2024-04-26 12:06:44.803025] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:44.496 [2024-04-26 12:06:44.895893] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.496 [2024-04-26 12:06:44.895958] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
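Condensed from the nvmf_tcp_init trace above: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened, and nvmf_tgt is then started inside the namespace. A sketch of the equivalent manual commands, with names taken from this log rather than a verbatim copy of nvmf/common.sh:

    # target port lives in its own network namespace with the target address
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions, load the initiator module, start the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The two ping replies traced immediately above are the output of exactly that sanity check.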
00:10:44.496 [2024-04-26 12:06:44.895966] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.496 [2024-04-26 12:06:44.895974] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.496 [2024-04-26 12:06:44.895979] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:44.496 [2024-04-26 12:06:44.896127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.496 [2024-04-26 12:06:44.896289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.496 [2024-04-26 12:06:44.896289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.496 12:06:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:44.496 12:06:45 -- common/autotest_common.sh@850 -- # return 0 00:10:44.496 12:06:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:44.496 12:06:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:44.496 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:10:44.496 12:06:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.496 12:06:45 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:44.496 12:06:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:44.496 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:10:44.496 [2024-04-26 12:06:45.522583] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.496 12:06:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:44.496 12:06:45 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:44.496 12:06:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:44.496 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:10:44.496 12:06:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:44.496 12:06:45 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.496 12:06:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:44.496 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:10:44.496 [2024-04-26 12:06:45.546941] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.497 12:06:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:44.497 12:06:45 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:44.497 12:06:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:44.497 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:10:44.497 NULL1 00:10:44.497 12:06:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:44.497 12:06:45 -- target/connect_stress.sh@21 -- # PERF_PID=3294187 00:10:44.497 12:06:45 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:44.497 12:06:45 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:44.497 12:06:45 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # seq 1 20 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.497 12:06:45 -- target/connect_stress.sh@28 -- # cat 00:10:44.497 12:06:45 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:44.497 12:06:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.497 12:06:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:44.497 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:10:45.070 12:06:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:45.070 12:06:45 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:45.070 12:06:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.070 12:06:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:45.070 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:10:45.331 12:06:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:45.331 12:06:46 
-- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:45.331 12:06:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.331 12:06:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:45.331 12:06:46 -- common/autotest_common.sh@10 -- # set +x 00:10:45.592 12:06:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:45.592 12:06:46 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:45.592 12:06:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.592 12:06:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:45.592 12:06:46 -- common/autotest_common.sh@10 -- # set +x 00:10:45.853 12:06:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:45.853 12:06:46 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:45.853 12:06:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.853 12:06:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:45.853 12:06:46 -- common/autotest_common.sh@10 -- # set +x 00:10:46.113 12:06:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:46.113 12:06:47 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:46.113 12:06:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.113 12:06:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:46.113 12:06:47 -- common/autotest_common.sh@10 -- # set +x 00:10:46.684 12:06:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:46.685 12:06:47 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:46.685 12:06:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.685 12:06:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:46.685 12:06:47 -- common/autotest_common.sh@10 -- # set +x 00:10:46.945 12:06:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:46.945 12:06:47 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:46.945 12:06:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.945 12:06:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:46.945 12:06:47 -- common/autotest_common.sh@10 -- # set +x 00:10:47.205 12:06:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:47.205 12:06:48 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:47.205 12:06:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.205 12:06:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:47.205 12:06:48 -- common/autotest_common.sh@10 -- # set +x 00:10:47.510 12:06:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:47.510 12:06:48 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:47.510 12:06:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.510 12:06:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:47.510 12:06:48 -- common/autotest_common.sh@10 -- # set +x 00:10:47.784 12:06:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:47.784 12:06:48 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:47.784 12:06:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.784 12:06:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:47.784 12:06:48 -- common/autotest_common.sh@10 -- # set +x 00:10:48.044 12:06:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.044 12:06:49 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:48.044 12:06:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.044 12:06:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.044 12:06:49 -- common/autotest_common.sh@10 -- # set +x 00:10:48.616 12:06:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.616 12:06:49 -- 
target/connect_stress.sh@34 -- # kill -0 3294187 00:10:48.616 12:06:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.616 12:06:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.616 12:06:49 -- common/autotest_common.sh@10 -- # set +x 00:10:48.877 12:06:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.877 12:06:49 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:48.877 12:06:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.877 12:06:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.877 12:06:49 -- common/autotest_common.sh@10 -- # set +x 00:10:49.138 12:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.138 12:06:50 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:49.138 12:06:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.138 12:06:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.138 12:06:50 -- common/autotest_common.sh@10 -- # set +x 00:10:49.398 12:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.398 12:06:50 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:49.398 12:06:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.398 12:06:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.398 12:06:50 -- common/autotest_common.sh@10 -- # set +x 00:10:49.658 12:06:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.658 12:06:50 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:49.658 12:06:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.658 12:06:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.658 12:06:50 -- common/autotest_common.sh@10 -- # set +x 00:10:50.228 12:06:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.229 12:06:51 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:50.229 12:06:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.229 12:06:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.229 12:06:51 -- common/autotest_common.sh@10 -- # set +x 00:10:50.489 12:06:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.489 12:06:51 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:50.489 12:06:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.489 12:06:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.489 12:06:51 -- common/autotest_common.sh@10 -- # set +x 00:10:50.749 12:06:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.749 12:06:51 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:50.749 12:06:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.749 12:06:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.749 12:06:51 -- common/autotest_common.sh@10 -- # set +x 00:10:51.009 12:06:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.009 12:06:52 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:51.009 12:06:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.009 12:06:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.009 12:06:52 -- common/autotest_common.sh@10 -- # set +x 00:10:51.580 12:06:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.580 12:06:52 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:51.580 12:06:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.580 12:06:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.580 12:06:52 -- common/autotest_common.sh@10 -- # set +x 00:10:51.840 12:06:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.840 12:06:52 -- 
target/connect_stress.sh@34 -- # kill -0 3294187 00:10:51.840 12:06:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.840 12:06:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.840 12:06:52 -- common/autotest_common.sh@10 -- # set +x 00:10:52.100 12:06:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:52.100 12:06:53 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:52.100 12:06:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.100 12:06:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.100 12:06:53 -- common/autotest_common.sh@10 -- # set +x 00:10:52.359 12:06:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:52.359 12:06:53 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:52.359 12:06:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.359 12:06:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.359 12:06:53 -- common/autotest_common.sh@10 -- # set +x 00:10:52.618 12:06:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:52.618 12:06:53 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:52.618 12:06:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.618 12:06:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.618 12:06:53 -- common/autotest_common.sh@10 -- # set +x 00:10:53.188 12:06:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.188 12:06:54 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:53.188 12:06:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.188 12:06:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.188 12:06:54 -- common/autotest_common.sh@10 -- # set +x 00:10:53.448 12:06:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.448 12:06:54 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:53.448 12:06:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.448 12:06:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.448 12:06:54 -- common/autotest_common.sh@10 -- # set +x 00:10:53.708 12:06:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.708 12:06:54 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:53.708 12:06:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.708 12:06:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.708 12:06:54 -- common/autotest_common.sh@10 -- # set +x 00:10:53.968 12:06:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.968 12:06:55 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:53.968 12:06:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.968 12:06:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.968 12:06:55 -- common/autotest_common.sh@10 -- # set +x 00:10:54.227 12:06:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:54.227 12:06:55 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:54.227 12:06:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.227 12:06:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:54.227 12:06:55 -- common/autotest_common.sh@10 -- # set +x 00:10:54.487 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:54.748 12:06:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:54.748 12:06:55 -- target/connect_stress.sh@34 -- # kill -0 3294187 00:10:54.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3294187) - No such process 00:10:54.748 12:06:55 -- target/connect_stress.sh@38 -- # wait 3294187 
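The pattern traced above (connect_stress.sh lines 21-38) amounts to: launch the connect_stress perf binary in the background against the new listener for 10 seconds, build a small rpc.txt batch, and keep replaying that batch for as long as the perf PID is alive; once the 10 s expire, kill -0 fails with "No such process" and the script falls through to wait and clean up. A rough reconstruction, with the RPCs written into rpc.txt elided because they are not visible in this excerpt:

    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    # rpc.txt is populated by the "for i in $(seq 1 20); cat" loop traced above;
    # its contents are not shown in this log, so they are left out here.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < rpc.txt      # exercise the target's RPC path while connections churn
    done
    wait "$PERF_PID"
    rm -f rpc.txt

The same nvmftestfini teardown then runs as at the top of this excerpt for ns_hotplug_stress: unload nvme-tcp/nvme-fabrics, kill the nvmf_tgt PID, remove the spdk namespace, and flush cvl_0_1.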
00:10:54.748 12:06:55 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:54.748 12:06:55 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:54.748 12:06:55 -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:54.748 12:06:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:54.748 12:06:55 -- nvmf/common.sh@117 -- # sync 00:10:54.748 12:06:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:54.748 12:06:55 -- nvmf/common.sh@120 -- # set +e 00:10:54.748 12:06:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:54.748 12:06:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:54.748 rmmod nvme_tcp 00:10:54.748 rmmod nvme_fabrics 00:10:54.748 rmmod nvme_keyring 00:10:54.748 12:06:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:54.748 12:06:55 -- nvmf/common.sh@124 -- # set -e 00:10:54.748 12:06:55 -- nvmf/common.sh@125 -- # return 0 00:10:54.748 12:06:55 -- nvmf/common.sh@478 -- # '[' -n 3293837 ']' 00:10:54.748 12:06:55 -- nvmf/common.sh@479 -- # killprocess 3293837 00:10:54.748 12:06:55 -- common/autotest_common.sh@936 -- # '[' -z 3293837 ']' 00:10:54.748 12:06:55 -- common/autotest_common.sh@940 -- # kill -0 3293837 00:10:54.748 12:06:55 -- common/autotest_common.sh@941 -- # uname 00:10:54.748 12:06:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:54.748 12:06:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3293837 00:10:54.748 12:06:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:54.748 12:06:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:54.748 12:06:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3293837' 00:10:54.748 killing process with pid 3293837 00:10:54.748 12:06:55 -- common/autotest_common.sh@955 -- # kill 3293837 00:10:54.748 12:06:55 -- common/autotest_common.sh@960 -- # wait 3293837 00:10:55.009 12:06:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:55.009 12:06:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:55.009 12:06:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:55.009 12:06:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:55.009 12:06:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:55.009 12:06:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.009 12:06:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.009 12:06:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.921 12:06:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:56.921 00:10:56.921 real 0m20.875s 00:10:56.921 user 0m41.984s 00:10:56.921 sys 0m8.691s 00:10:56.921 12:06:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:56.921 12:06:58 -- common/autotest_common.sh@10 -- # set +x 00:10:56.921 ************************************ 00:10:56.921 END TEST nvmf_connect_stress 00:10:56.921 ************************************ 00:10:56.921 12:06:58 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:56.921 12:06:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:56.921 12:06:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:56.921 12:06:58 -- common/autotest_common.sh@10 -- # set +x 00:10:57.182 ************************************ 00:10:57.182 START TEST nvmf_fused_ordering 00:10:57.182 
************************************ 00:10:57.182 12:06:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:57.182 * Looking for test storage... 00:10:57.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.182 12:06:58 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.182 12:06:58 -- nvmf/common.sh@7 -- # uname -s 00:10:57.182 12:06:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.182 12:06:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.182 12:06:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.182 12:06:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.182 12:06:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.182 12:06:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.182 12:06:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.182 12:06:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.182 12:06:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.182 12:06:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.182 12:06:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:57.182 12:06:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:57.182 12:06:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.182 12:06:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.182 12:06:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.182 12:06:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.182 12:06:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.182 12:06:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.182 12:06:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.182 12:06:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.182 12:06:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.182 12:06:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.182 12:06:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.182 12:06:58 -- paths/export.sh@5 -- # export PATH 00:10:57.182 12:06:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.182 12:06:58 -- nvmf/common.sh@47 -- # : 0 00:10:57.182 12:06:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:57.182 12:06:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:57.182 12:06:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.182 12:06:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.182 12:06:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.182 12:06:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:57.182 12:06:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:57.182 12:06:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:57.443 12:06:58 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:57.443 12:06:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:57.443 12:06:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.443 12:06:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:57.443 12:06:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:57.443 12:06:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:57.443 12:06:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.443 12:06:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.443 12:06:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.443 12:06:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:57.443 12:06:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:57.443 12:06:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:57.443 12:06:58 -- common/autotest_common.sh@10 -- # set +x 00:11:05.588 12:07:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:05.588 12:07:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:05.588 12:07:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:05.588 12:07:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:05.588 12:07:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:05.588 12:07:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:05.588 12:07:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:05.588 12:07:05 -- nvmf/common.sh@295 -- # net_devs=() 00:11:05.588 12:07:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:05.588 12:07:05 -- nvmf/common.sh@296 -- # e810=() 00:11:05.588 12:07:05 -- nvmf/common.sh@296 -- # local -ga e810 00:11:05.588 12:07:05 -- nvmf/common.sh@297 -- # x722=() 
00:11:05.588 12:07:05 -- nvmf/common.sh@297 -- # local -ga x722 00:11:05.588 12:07:05 -- nvmf/common.sh@298 -- # mlx=() 00:11:05.588 12:07:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:05.588 12:07:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.588 12:07:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.588 12:07:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.588 12:07:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.588 12:07:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.588 12:07:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.588 12:07:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.588 12:07:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.588 12:07:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.588 12:07:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.588 12:07:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.588 12:07:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:05.588 12:07:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:05.588 12:07:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:05.588 12:07:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.588 12:07:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:05.588 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:05.588 12:07:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.588 12:07:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:05.588 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:05.588 12:07:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:05.588 12:07:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:05.588 12:07:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.588 12:07:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.588 12:07:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:05.589 12:07:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.589 12:07:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:05.589 Found net devices under 0000:31:00.0: cvl_0_0 00:11:05.589 12:07:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:11:05.589 12:07:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.589 12:07:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.589 12:07:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:05.589 12:07:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.589 12:07:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:05.589 Found net devices under 0000:31:00.1: cvl_0_1 00:11:05.589 12:07:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.589 12:07:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:05.589 12:07:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:05.589 12:07:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:05.589 12:07:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:05.589 12:07:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:05.589 12:07:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.589 12:07:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.589 12:07:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.589 12:07:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:05.589 12:07:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.589 12:07:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.589 12:07:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:05.589 12:07:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.589 12:07:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.589 12:07:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:05.589 12:07:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:05.589 12:07:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.589 12:07:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.589 12:07:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.589 12:07:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.589 12:07:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:05.589 12:07:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.589 12:07:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.589 12:07:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.589 12:07:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:05.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:11:05.589 00:11:05.589 --- 10.0.0.2 ping statistics --- 00:11:05.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.589 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:11:05.589 12:07:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:11:05.589 00:11:05.589 --- 10.0.0.1 ping statistics --- 00:11:05.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.589 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:11:05.589 12:07:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.589 12:07:05 -- nvmf/common.sh@411 -- # return 0 00:11:05.589 12:07:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:05.589 12:07:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.589 12:07:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:05.589 12:07:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:05.589 12:07:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.589 12:07:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:05.589 12:07:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:05.589 12:07:05 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:05.589 12:07:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:05.589 12:07:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:05.589 12:07:05 -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 12:07:05 -- nvmf/common.sh@470 -- # nvmfpid=3300412 00:11:05.589 12:07:05 -- nvmf/common.sh@471 -- # waitforlisten 3300412 00:11:05.589 12:07:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:05.589 12:07:05 -- common/autotest_common.sh@817 -- # '[' -z 3300412 ']' 00:11:05.589 12:07:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.589 12:07:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:05.589 12:07:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.589 12:07:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:05.589 12:07:05 -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 [2024-04-26 12:07:05.698707] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:11:05.589 [2024-04-26 12:07:05.698774] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.589 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.589 [2024-04-26 12:07:05.788181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.589 [2024-04-26 12:07:05.880147] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.589 [2024-04-26 12:07:05.880205] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.589 [2024-04-26 12:07:05.880213] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.589 [2024-04-26 12:07:05.880220] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.589 [2024-04-26 12:07:05.880226] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
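One difference from the connect_stress run above is the reactor mask: there nvmf_tgt was started with -m 0xE (cores 1-3, hence the three "Reactor started" lines), while this fused_ordering target uses -m 0x2, giving "Total cores available: 1" and the single reactor on core 1 reported just below. The -m value is a hex bitmap of CPU cores; a quick way to decode one in shell:

    mask=0xE                      # 0b1110 -> cores 1, 2, 3 ; 0x2 -> core 1 only
    for cpu in $(seq 0 31); do
        (( (mask >> cpu) & 1 )) && echo "core $cpu"
    done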
00:11:05.589 [2024-04-26 12:07:05.880262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.589 12:07:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:05.589 12:07:06 -- common/autotest_common.sh@850 -- # return 0 00:11:05.589 12:07:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:05.589 12:07:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:05.589 12:07:06 -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 12:07:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.589 12:07:06 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:05.589 12:07:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.589 12:07:06 -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 [2024-04-26 12:07:06.527651] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.589 12:07:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.589 12:07:06 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:05.589 12:07:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.589 12:07:06 -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 12:07:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.589 12:07:06 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.589 12:07:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.589 12:07:06 -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 [2024-04-26 12:07:06.551896] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.589 12:07:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.589 12:07:06 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:05.589 12:07:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.589 12:07:06 -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 NULL1 00:11:05.589 12:07:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.589 12:07:06 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:05.589 12:07:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.589 12:07:06 -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 12:07:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.589 12:07:06 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:05.589 12:07:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.589 12:07:06 -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 12:07:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.589 12:07:06 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:05.589 [2024-04-26 12:07:06.621979] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
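For reference, the target-side provisioning traced in fused_ordering.sh@15-22 above reduces to six RPCs (rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock inside the namespace), followed by the fused_ordering initiator run whose output fills the rest of this excerpt. Roughly equivalent when issued by hand:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MiB null bdev, 512-byte blocks: the "1GB" namespace reported below
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'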
00:11:05.589 [2024-04-26 12:07:06.622058] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3300644 ] 00:11:05.589 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.850 Attached to nqn.2016-06.io.spdk:cnode1 00:11:05.850 Namespace ID: 1 size: 1GB 00:11:05.850 fused_ordering(0) 00:11:05.850 fused_ordering(1) 00:11:05.850 fused_ordering(2) 00:11:05.850 fused_ordering(3) 00:11:05.850 fused_ordering(4) 00:11:05.850 fused_ordering(5) 00:11:05.850 fused_ordering(6) 00:11:05.850 fused_ordering(7) 00:11:05.850 fused_ordering(8) 00:11:05.850 fused_ordering(9) 00:11:05.850 fused_ordering(10) 00:11:05.850 fused_ordering(11) 00:11:05.850 fused_ordering(12) 00:11:05.850 fused_ordering(13) 00:11:05.850 fused_ordering(14) 00:11:05.850 fused_ordering(15) 00:11:05.850 fused_ordering(16) 00:11:05.850 fused_ordering(17) 00:11:05.850 fused_ordering(18) 00:11:05.850 fused_ordering(19) 00:11:05.850 fused_ordering(20) 00:11:05.850 fused_ordering(21) 00:11:05.850 fused_ordering(22) 00:11:05.850 fused_ordering(23) 00:11:05.850 fused_ordering(24) 00:11:05.850 fused_ordering(25) 00:11:05.850 fused_ordering(26) 00:11:05.850 fused_ordering(27) 00:11:05.850 fused_ordering(28) 00:11:05.850 fused_ordering(29) 00:11:05.850 fused_ordering(30) 00:11:05.850 fused_ordering(31) 00:11:05.850 fused_ordering(32) 00:11:05.850 fused_ordering(33) 00:11:05.850 fused_ordering(34) 00:11:05.850 fused_ordering(35) 00:11:05.850 fused_ordering(36) 00:11:05.850 fused_ordering(37) 00:11:05.850 fused_ordering(38) 00:11:05.850 fused_ordering(39) 00:11:05.850 fused_ordering(40) 00:11:05.850 fused_ordering(41) 00:11:05.850 fused_ordering(42) 00:11:05.850 fused_ordering(43) 00:11:05.850 fused_ordering(44) 00:11:05.850 fused_ordering(45) 00:11:05.850 fused_ordering(46) 00:11:05.850 fused_ordering(47) 00:11:05.850 fused_ordering(48) 00:11:05.850 fused_ordering(49) 00:11:05.850 fused_ordering(50) 00:11:05.850 fused_ordering(51) 00:11:05.850 fused_ordering(52) 00:11:05.850 fused_ordering(53) 00:11:05.850 fused_ordering(54) 00:11:05.850 fused_ordering(55) 00:11:05.850 fused_ordering(56) 00:11:05.850 fused_ordering(57) 00:11:05.850 fused_ordering(58) 00:11:05.850 fused_ordering(59) 00:11:05.850 fused_ordering(60) 00:11:05.850 fused_ordering(61) 00:11:05.850 fused_ordering(62) 00:11:05.850 fused_ordering(63) 00:11:05.850 fused_ordering(64) 00:11:05.850 fused_ordering(65) 00:11:05.850 fused_ordering(66) 00:11:05.850 fused_ordering(67) 00:11:05.850 fused_ordering(68) 00:11:05.850 fused_ordering(69) 00:11:05.850 fused_ordering(70) 00:11:05.850 fused_ordering(71) 00:11:05.850 fused_ordering(72) 00:11:05.850 fused_ordering(73) 00:11:05.850 fused_ordering(74) 00:11:05.850 fused_ordering(75) 00:11:05.850 fused_ordering(76) 00:11:05.850 fused_ordering(77) 00:11:05.850 fused_ordering(78) 00:11:05.850 fused_ordering(79) 00:11:05.850 fused_ordering(80) 00:11:05.850 fused_ordering(81) 00:11:05.850 fused_ordering(82) 00:11:05.850 fused_ordering(83) 00:11:05.850 fused_ordering(84) 00:11:05.850 fused_ordering(85) 00:11:05.850 fused_ordering(86) 00:11:05.850 fused_ordering(87) 00:11:05.851 fused_ordering(88) 00:11:05.851 fused_ordering(89) 00:11:05.851 fused_ordering(90) 00:11:05.851 fused_ordering(91) 00:11:05.851 fused_ordering(92) 00:11:05.851 fused_ordering(93) 00:11:05.851 fused_ordering(94) 00:11:05.851 fused_ordering(95) 00:11:05.851 fused_ordering(96) 00:11:05.851 
fused_ordering(97) 00:11:05.851 fused_ordering(98) 00:11:05.851 fused_ordering(99) 00:11:05.851 fused_ordering(100) 00:11:05.851 fused_ordering(101) 00:11:05.851 fused_ordering(102) 00:11:05.851 fused_ordering(103) 00:11:05.851 fused_ordering(104) 00:11:05.851 fused_ordering(105) 00:11:05.851 fused_ordering(106) 00:11:05.851 fused_ordering(107) 00:11:05.851 fused_ordering(108) 00:11:05.851 fused_ordering(109) 00:11:05.851 fused_ordering(110) 00:11:05.851 fused_ordering(111) 00:11:05.851 fused_ordering(112) 00:11:05.851 fused_ordering(113) 00:11:05.851 fused_ordering(114) 00:11:05.851 fused_ordering(115) 00:11:05.851 fused_ordering(116) 00:11:05.851 fused_ordering(117) 00:11:05.851 fused_ordering(118) 00:11:05.851 fused_ordering(119) 00:11:05.851 fused_ordering(120) 00:11:05.851 fused_ordering(121) 00:11:05.851 fused_ordering(122) 00:11:05.851 fused_ordering(123) 00:11:05.851 fused_ordering(124) 00:11:05.851 fused_ordering(125) 00:11:05.851 fused_ordering(126) 00:11:05.851 fused_ordering(127) 00:11:05.851 fused_ordering(128) 00:11:05.851 fused_ordering(129) 00:11:05.851 fused_ordering(130) 00:11:05.851 fused_ordering(131) 00:11:05.851 fused_ordering(132) 00:11:05.851 fused_ordering(133) 00:11:05.851 fused_ordering(134) 00:11:05.851 fused_ordering(135) 00:11:05.851 fused_ordering(136) 00:11:05.851 fused_ordering(137) 00:11:05.851 fused_ordering(138) 00:11:05.851 fused_ordering(139) 00:11:05.851 fused_ordering(140) 00:11:05.851 fused_ordering(141) 00:11:05.851 fused_ordering(142) 00:11:05.851 fused_ordering(143) 00:11:05.851 fused_ordering(144) 00:11:05.851 fused_ordering(145) 00:11:05.851 fused_ordering(146) 00:11:05.851 fused_ordering(147) 00:11:05.851 fused_ordering(148) 00:11:05.851 fused_ordering(149) 00:11:05.851 fused_ordering(150) 00:11:05.851 fused_ordering(151) 00:11:05.851 fused_ordering(152) 00:11:05.851 fused_ordering(153) 00:11:05.851 fused_ordering(154) 00:11:05.851 fused_ordering(155) 00:11:05.851 fused_ordering(156) 00:11:05.851 fused_ordering(157) 00:11:05.851 fused_ordering(158) 00:11:05.851 fused_ordering(159) 00:11:05.851 fused_ordering(160) 00:11:05.851 fused_ordering(161) 00:11:05.851 fused_ordering(162) 00:11:05.851 fused_ordering(163) 00:11:05.851 fused_ordering(164) 00:11:05.851 fused_ordering(165) 00:11:05.851 fused_ordering(166) 00:11:05.851 fused_ordering(167) 00:11:05.851 fused_ordering(168) 00:11:05.851 fused_ordering(169) 00:11:05.851 fused_ordering(170) 00:11:05.851 fused_ordering(171) 00:11:05.851 fused_ordering(172) 00:11:05.851 fused_ordering(173) 00:11:05.851 fused_ordering(174) 00:11:05.851 fused_ordering(175) 00:11:05.851 fused_ordering(176) 00:11:05.851 fused_ordering(177) 00:11:05.851 fused_ordering(178) 00:11:05.851 fused_ordering(179) 00:11:05.851 fused_ordering(180) 00:11:05.851 fused_ordering(181) 00:11:05.851 fused_ordering(182) 00:11:05.851 fused_ordering(183) 00:11:05.851 fused_ordering(184) 00:11:05.851 fused_ordering(185) 00:11:05.851 fused_ordering(186) 00:11:05.851 fused_ordering(187) 00:11:05.851 fused_ordering(188) 00:11:05.851 fused_ordering(189) 00:11:05.851 fused_ordering(190) 00:11:05.851 fused_ordering(191) 00:11:05.851 fused_ordering(192) 00:11:05.851 fused_ordering(193) 00:11:05.851 fused_ordering(194) 00:11:05.851 fused_ordering(195) 00:11:05.851 fused_ordering(196) 00:11:05.851 fused_ordering(197) 00:11:05.851 fused_ordering(198) 00:11:05.851 fused_ordering(199) 00:11:05.851 fused_ordering(200) 00:11:05.851 fused_ordering(201) 00:11:05.851 fused_ordering(202) 00:11:05.851 fused_ordering(203) 00:11:05.851 fused_ordering(204) 
00:11:05.851 fused_ordering(205) 00:11:06.423 fused_ordering(206) 00:11:06.423 fused_ordering(207) 00:11:06.423 fused_ordering(208) 00:11:06.423 fused_ordering(209) 00:11:06.423 fused_ordering(210) 00:11:06.423 fused_ordering(211) 00:11:06.423 fused_ordering(212) 00:11:06.423 fused_ordering(213) 00:11:06.423 fused_ordering(214) 00:11:06.423 fused_ordering(215) 00:11:06.423 fused_ordering(216) 00:11:06.423 fused_ordering(217) 00:11:06.423 fused_ordering(218) 00:11:06.423 fused_ordering(219) 00:11:06.423 fused_ordering(220) 00:11:06.423 fused_ordering(221) 00:11:06.423 fused_ordering(222) 00:11:06.423 fused_ordering(223) 00:11:06.423 fused_ordering(224) 00:11:06.423 fused_ordering(225) 00:11:06.423 fused_ordering(226) 00:11:06.423 fused_ordering(227) 00:11:06.423 fused_ordering(228) 00:11:06.423 fused_ordering(229) 00:11:06.423 fused_ordering(230) 00:11:06.423 fused_ordering(231) 00:11:06.423 fused_ordering(232) 00:11:06.423 fused_ordering(233) 00:11:06.423 fused_ordering(234) 00:11:06.423 fused_ordering(235) 00:11:06.423 fused_ordering(236) 00:11:06.423 fused_ordering(237) 00:11:06.423 fused_ordering(238) 00:11:06.423 fused_ordering(239) 00:11:06.423 fused_ordering(240) 00:11:06.423 fused_ordering(241) 00:11:06.423 fused_ordering(242) 00:11:06.423 fused_ordering(243) 00:11:06.423 fused_ordering(244) 00:11:06.423 fused_ordering(245) 00:11:06.423 fused_ordering(246) 00:11:06.423 fused_ordering(247) 00:11:06.423 fused_ordering(248) 00:11:06.423 fused_ordering(249) 00:11:06.423 fused_ordering(250) 00:11:06.423 fused_ordering(251) 00:11:06.423 fused_ordering(252) 00:11:06.423 fused_ordering(253) 00:11:06.423 fused_ordering(254) 00:11:06.423 fused_ordering(255) 00:11:06.423 fused_ordering(256) 00:11:06.423 fused_ordering(257) 00:11:06.423 fused_ordering(258) 00:11:06.423 fused_ordering(259) 00:11:06.423 fused_ordering(260) 00:11:06.423 fused_ordering(261) 00:11:06.423 fused_ordering(262) 00:11:06.423 fused_ordering(263) 00:11:06.423 fused_ordering(264) 00:11:06.423 fused_ordering(265) 00:11:06.423 fused_ordering(266) 00:11:06.423 fused_ordering(267) 00:11:06.423 fused_ordering(268) 00:11:06.423 fused_ordering(269) 00:11:06.423 fused_ordering(270) 00:11:06.423 fused_ordering(271) 00:11:06.423 fused_ordering(272) 00:11:06.423 fused_ordering(273) 00:11:06.423 fused_ordering(274) 00:11:06.423 fused_ordering(275) 00:11:06.423 fused_ordering(276) 00:11:06.423 fused_ordering(277) 00:11:06.423 fused_ordering(278) 00:11:06.423 fused_ordering(279) 00:11:06.423 fused_ordering(280) 00:11:06.423 fused_ordering(281) 00:11:06.423 fused_ordering(282) 00:11:06.423 fused_ordering(283) 00:11:06.423 fused_ordering(284) 00:11:06.423 fused_ordering(285) 00:11:06.423 fused_ordering(286) 00:11:06.423 fused_ordering(287) 00:11:06.423 fused_ordering(288) 00:11:06.423 fused_ordering(289) 00:11:06.423 fused_ordering(290) 00:11:06.423 fused_ordering(291) 00:11:06.423 fused_ordering(292) 00:11:06.423 fused_ordering(293) 00:11:06.423 fused_ordering(294) 00:11:06.423 fused_ordering(295) 00:11:06.423 fused_ordering(296) 00:11:06.423 fused_ordering(297) 00:11:06.423 fused_ordering(298) 00:11:06.423 fused_ordering(299) 00:11:06.423 fused_ordering(300) 00:11:06.423 fused_ordering(301) 00:11:06.423 fused_ordering(302) 00:11:06.423 fused_ordering(303) 00:11:06.423 fused_ordering(304) 00:11:06.423 fused_ordering(305) 00:11:06.423 fused_ordering(306) 00:11:06.423 fused_ordering(307) 00:11:06.423 fused_ordering(308) 00:11:06.423 fused_ordering(309) 00:11:06.423 fused_ordering(310) 00:11:06.423 fused_ordering(311) 00:11:06.423 
fused_ordering(312) 00:11:06.423 fused_ordering(313) 00:11:06.423 fused_ordering(314) 00:11:06.423 fused_ordering(315) 00:11:06.423 fused_ordering(316) 00:11:06.423 fused_ordering(317) 00:11:06.423 fused_ordering(318) 00:11:06.423 fused_ordering(319) 00:11:06.423 fused_ordering(320) 00:11:06.423 fused_ordering(321) 00:11:06.423 fused_ordering(322) 00:11:06.423 fused_ordering(323) 00:11:06.423 fused_ordering(324) 00:11:06.423 fused_ordering(325) 00:11:06.423 fused_ordering(326) 00:11:06.423 fused_ordering(327) 00:11:06.423 fused_ordering(328) 00:11:06.423 fused_ordering(329) 00:11:06.423 fused_ordering(330) 00:11:06.423 fused_ordering(331) 00:11:06.423 fused_ordering(332) 00:11:06.423 fused_ordering(333) 00:11:06.423 fused_ordering(334) 00:11:06.423 fused_ordering(335) 00:11:06.423 fused_ordering(336) 00:11:06.423 fused_ordering(337) 00:11:06.423 fused_ordering(338) 00:11:06.423 fused_ordering(339) 00:11:06.423 fused_ordering(340) 00:11:06.423 fused_ordering(341) 00:11:06.423 fused_ordering(342) 00:11:06.423 fused_ordering(343) 00:11:06.423 fused_ordering(344) 00:11:06.423 fused_ordering(345) 00:11:06.423 fused_ordering(346) 00:11:06.423 fused_ordering(347) 00:11:06.423 fused_ordering(348) 00:11:06.423 fused_ordering(349) 00:11:06.423 fused_ordering(350) 00:11:06.423 fused_ordering(351) 00:11:06.423 fused_ordering(352) 00:11:06.423 fused_ordering(353) 00:11:06.423 fused_ordering(354) 00:11:06.423 fused_ordering(355) 00:11:06.423 fused_ordering(356) 00:11:06.423 fused_ordering(357) 00:11:06.423 fused_ordering(358) 00:11:06.423 fused_ordering(359) 00:11:06.423 fused_ordering(360) 00:11:06.423 fused_ordering(361) 00:11:06.423 fused_ordering(362) 00:11:06.423 fused_ordering(363) 00:11:06.423 fused_ordering(364) 00:11:06.423 fused_ordering(365) 00:11:06.423 fused_ordering(366) 00:11:06.423 fused_ordering(367) 00:11:06.423 fused_ordering(368) 00:11:06.423 fused_ordering(369) 00:11:06.423 fused_ordering(370) 00:11:06.423 fused_ordering(371) 00:11:06.423 fused_ordering(372) 00:11:06.423 fused_ordering(373) 00:11:06.423 fused_ordering(374) 00:11:06.423 fused_ordering(375) 00:11:06.423 fused_ordering(376) 00:11:06.423 fused_ordering(377) 00:11:06.423 fused_ordering(378) 00:11:06.423 fused_ordering(379) 00:11:06.423 fused_ordering(380) 00:11:06.423 fused_ordering(381) 00:11:06.423 fused_ordering(382) 00:11:06.423 fused_ordering(383) 00:11:06.423 fused_ordering(384) 00:11:06.423 fused_ordering(385) 00:11:06.423 fused_ordering(386) 00:11:06.423 fused_ordering(387) 00:11:06.423 fused_ordering(388) 00:11:06.423 fused_ordering(389) 00:11:06.423 fused_ordering(390) 00:11:06.423 fused_ordering(391) 00:11:06.423 fused_ordering(392) 00:11:06.423 fused_ordering(393) 00:11:06.423 fused_ordering(394) 00:11:06.423 fused_ordering(395) 00:11:06.423 fused_ordering(396) 00:11:06.423 fused_ordering(397) 00:11:06.423 fused_ordering(398) 00:11:06.423 fused_ordering(399) 00:11:06.423 fused_ordering(400) 00:11:06.423 fused_ordering(401) 00:11:06.423 fused_ordering(402) 00:11:06.423 fused_ordering(403) 00:11:06.423 fused_ordering(404) 00:11:06.423 fused_ordering(405) 00:11:06.423 fused_ordering(406) 00:11:06.423 fused_ordering(407) 00:11:06.423 fused_ordering(408) 00:11:06.423 fused_ordering(409) 00:11:06.423 fused_ordering(410) 00:11:06.686 fused_ordering(411) 00:11:06.686 fused_ordering(412) 00:11:06.686 fused_ordering(413) 00:11:06.686 fused_ordering(414) 00:11:06.686 fused_ordering(415) 00:11:06.686 fused_ordering(416) 00:11:06.686 fused_ordering(417) 00:11:06.686 fused_ordering(418) 00:11:06.686 fused_ordering(419) 
00:11:06.686 fused_ordering(420) 00:11:06.686 fused_ordering(421) 00:11:06.686 fused_ordering(422) 00:11:06.686 fused_ordering(423) 00:11:06.686 fused_ordering(424) 00:11:06.686 fused_ordering(425) 00:11:06.686 fused_ordering(426) 00:11:06.686 fused_ordering(427) 00:11:06.686 fused_ordering(428) 00:11:06.686 fused_ordering(429) 00:11:06.686 fused_ordering(430) 00:11:06.686 fused_ordering(431) 00:11:06.686 fused_ordering(432) 00:11:06.686 fused_ordering(433) 00:11:06.686 fused_ordering(434) 00:11:06.686 fused_ordering(435) 00:11:06.686 fused_ordering(436) 00:11:06.686 fused_ordering(437) 00:11:06.686 fused_ordering(438) 00:11:06.686 fused_ordering(439) 00:11:06.686 fused_ordering(440) 00:11:06.686 fused_ordering(441) 00:11:06.686 fused_ordering(442) 00:11:06.686 fused_ordering(443) 00:11:06.686 fused_ordering(444) 00:11:06.686 fused_ordering(445) 00:11:06.686 fused_ordering(446) 00:11:06.686 fused_ordering(447) 00:11:06.686 fused_ordering(448) 00:11:06.686 fused_ordering(449) 00:11:06.686 fused_ordering(450) 00:11:06.686 fused_ordering(451) 00:11:06.686 fused_ordering(452) 00:11:06.686 fused_ordering(453) 00:11:06.686 fused_ordering(454) 00:11:06.686 fused_ordering(455) 00:11:06.686 fused_ordering(456) 00:11:06.686 fused_ordering(457) 00:11:06.686 fused_ordering(458) 00:11:06.686 fused_ordering(459) 00:11:06.686 fused_ordering(460) 00:11:06.686 fused_ordering(461) 00:11:06.686 fused_ordering(462) 00:11:06.686 fused_ordering(463) 00:11:06.686 fused_ordering(464) 00:11:06.686 fused_ordering(465) 00:11:06.686 fused_ordering(466) 00:11:06.686 fused_ordering(467) 00:11:06.686 fused_ordering(468) 00:11:06.686 fused_ordering(469) 00:11:06.686 fused_ordering(470) 00:11:06.686 fused_ordering(471) 00:11:06.686 fused_ordering(472) 00:11:06.686 fused_ordering(473) 00:11:06.686 fused_ordering(474) 00:11:06.686 fused_ordering(475) 00:11:06.686 fused_ordering(476) 00:11:06.686 fused_ordering(477) 00:11:06.686 fused_ordering(478) 00:11:06.686 fused_ordering(479) 00:11:06.686 fused_ordering(480) 00:11:06.686 fused_ordering(481) 00:11:06.686 fused_ordering(482) 00:11:06.686 fused_ordering(483) 00:11:06.686 fused_ordering(484) 00:11:06.686 fused_ordering(485) 00:11:06.686 fused_ordering(486) 00:11:06.686 fused_ordering(487) 00:11:06.686 fused_ordering(488) 00:11:06.686 fused_ordering(489) 00:11:06.686 fused_ordering(490) 00:11:06.686 fused_ordering(491) 00:11:06.686 fused_ordering(492) 00:11:06.686 fused_ordering(493) 00:11:06.686 fused_ordering(494) 00:11:06.686 fused_ordering(495) 00:11:06.686 fused_ordering(496) 00:11:06.686 fused_ordering(497) 00:11:06.686 fused_ordering(498) 00:11:06.686 fused_ordering(499) 00:11:06.686 fused_ordering(500) 00:11:06.686 fused_ordering(501) 00:11:06.686 fused_ordering(502) 00:11:06.686 fused_ordering(503) 00:11:06.686 fused_ordering(504) 00:11:06.686 fused_ordering(505) 00:11:06.686 fused_ordering(506) 00:11:06.686 fused_ordering(507) 00:11:06.686 fused_ordering(508) 00:11:06.686 fused_ordering(509) 00:11:06.686 fused_ordering(510) 00:11:06.686 fused_ordering(511) 00:11:06.686 fused_ordering(512) 00:11:06.686 fused_ordering(513) 00:11:06.686 fused_ordering(514) 00:11:06.686 fused_ordering(515) 00:11:06.686 fused_ordering(516) 00:11:06.686 fused_ordering(517) 00:11:06.686 fused_ordering(518) 00:11:06.686 fused_ordering(519) 00:11:06.686 fused_ordering(520) 00:11:06.686 fused_ordering(521) 00:11:06.686 fused_ordering(522) 00:11:06.686 fused_ordering(523) 00:11:06.686 fused_ordering(524) 00:11:06.686 fused_ordering(525) 00:11:06.686 fused_ordering(526) 00:11:06.686 
fused_ordering(527) 00:11:06.686 fused_ordering(528) 00:11:06.686 fused_ordering(529) 00:11:06.686 fused_ordering(530) 00:11:06.686 fused_ordering(531) 00:11:06.686 fused_ordering(532) 00:11:06.686 fused_ordering(533) 00:11:06.686 fused_ordering(534) 00:11:06.686 fused_ordering(535) 00:11:06.686 fused_ordering(536) 00:11:06.686 fused_ordering(537) 00:11:06.686 fused_ordering(538) 00:11:06.686 fused_ordering(539) 00:11:06.686 fused_ordering(540) 00:11:06.686 fused_ordering(541) 00:11:06.686 fused_ordering(542) 00:11:06.686 fused_ordering(543) 00:11:06.686 fused_ordering(544) 00:11:06.686 fused_ordering(545) 00:11:06.686 fused_ordering(546) 00:11:06.686 fused_ordering(547) 00:11:06.686 fused_ordering(548) 00:11:06.686 fused_ordering(549) 00:11:06.686 fused_ordering(550) 00:11:06.686 fused_ordering(551) 00:11:06.686 fused_ordering(552) 00:11:06.686 fused_ordering(553) 00:11:06.686 fused_ordering(554) 00:11:06.686 fused_ordering(555) 00:11:06.686 fused_ordering(556) 00:11:06.686 fused_ordering(557) 00:11:06.686 fused_ordering(558) 00:11:06.686 fused_ordering(559) 00:11:06.686 fused_ordering(560) 00:11:06.686 fused_ordering(561) 00:11:06.686 fused_ordering(562) 00:11:06.686 fused_ordering(563) 00:11:06.686 fused_ordering(564) 00:11:06.686 fused_ordering(565) 00:11:06.686 fused_ordering(566) 00:11:06.686 fused_ordering(567) 00:11:06.686 fused_ordering(568) 00:11:06.686 fused_ordering(569) 00:11:06.686 fused_ordering(570) 00:11:06.686 fused_ordering(571) 00:11:06.686 fused_ordering(572) 00:11:06.686 fused_ordering(573) 00:11:06.686 fused_ordering(574) 00:11:06.686 fused_ordering(575) 00:11:06.686 fused_ordering(576) 00:11:06.686 fused_ordering(577) 00:11:06.686 fused_ordering(578) 00:11:06.686 fused_ordering(579) 00:11:06.686 fused_ordering(580) 00:11:06.686 fused_ordering(581) 00:11:06.686 fused_ordering(582) 00:11:06.686 fused_ordering(583) 00:11:06.686 fused_ordering(584) 00:11:06.686 fused_ordering(585) 00:11:06.686 fused_ordering(586) 00:11:06.686 fused_ordering(587) 00:11:06.686 fused_ordering(588) 00:11:06.686 fused_ordering(589) 00:11:06.686 fused_ordering(590) 00:11:06.686 fused_ordering(591) 00:11:06.686 fused_ordering(592) 00:11:06.686 fused_ordering(593) 00:11:06.686 fused_ordering(594) 00:11:06.686 fused_ordering(595) 00:11:06.686 fused_ordering(596) 00:11:06.686 fused_ordering(597) 00:11:06.686 fused_ordering(598) 00:11:06.686 fused_ordering(599) 00:11:06.686 fused_ordering(600) 00:11:06.686 fused_ordering(601) 00:11:06.686 fused_ordering(602) 00:11:06.686 fused_ordering(603) 00:11:06.686 fused_ordering(604) 00:11:06.686 fused_ordering(605) 00:11:06.686 fused_ordering(606) 00:11:06.686 fused_ordering(607) 00:11:06.686 fused_ordering(608) 00:11:06.686 fused_ordering(609) 00:11:06.686 fused_ordering(610) 00:11:06.686 fused_ordering(611) 00:11:06.686 fused_ordering(612) 00:11:06.686 fused_ordering(613) 00:11:06.686 fused_ordering(614) 00:11:06.686 fused_ordering(615) 00:11:07.258 fused_ordering(616) 00:11:07.258 fused_ordering(617) 00:11:07.258 fused_ordering(618) 00:11:07.258 fused_ordering(619) 00:11:07.258 fused_ordering(620) 00:11:07.258 fused_ordering(621) 00:11:07.258 fused_ordering(622) 00:11:07.258 fused_ordering(623) 00:11:07.258 fused_ordering(624) 00:11:07.258 fused_ordering(625) 00:11:07.258 fused_ordering(626) 00:11:07.258 fused_ordering(627) 00:11:07.258 fused_ordering(628) 00:11:07.258 fused_ordering(629) 00:11:07.258 fused_ordering(630) 00:11:07.258 fused_ordering(631) 00:11:07.258 fused_ordering(632) 00:11:07.258 fused_ordering(633) 00:11:07.258 fused_ordering(634) 
00:11:07.258 fused_ordering(635) 00:11:07.258 fused_ordering(636) 00:11:07.258 fused_ordering(637) 00:11:07.258 fused_ordering(638) 00:11:07.258 fused_ordering(639) 00:11:07.258 fused_ordering(640) 00:11:07.258 fused_ordering(641) 00:11:07.258 fused_ordering(642) 00:11:07.258 fused_ordering(643) 00:11:07.258 fused_ordering(644) 00:11:07.258 fused_ordering(645) 00:11:07.258 fused_ordering(646) 00:11:07.258 fused_ordering(647) 00:11:07.258 fused_ordering(648) 00:11:07.258 fused_ordering(649) 00:11:07.258 fused_ordering(650) 00:11:07.258 fused_ordering(651) 00:11:07.258 fused_ordering(652) 00:11:07.258 fused_ordering(653) 00:11:07.258 fused_ordering(654) 00:11:07.258 fused_ordering(655) 00:11:07.258 fused_ordering(656) 00:11:07.258 fused_ordering(657) 00:11:07.258 fused_ordering(658) 00:11:07.258 fused_ordering(659) 00:11:07.258 fused_ordering(660) 00:11:07.258 fused_ordering(661) 00:11:07.258 fused_ordering(662) 00:11:07.258 fused_ordering(663) 00:11:07.258 fused_ordering(664) 00:11:07.258 fused_ordering(665) 00:11:07.258 fused_ordering(666) 00:11:07.258 fused_ordering(667) 00:11:07.258 fused_ordering(668) 00:11:07.258 fused_ordering(669) 00:11:07.259 fused_ordering(670) 00:11:07.259 fused_ordering(671) 00:11:07.259 fused_ordering(672) 00:11:07.259 fused_ordering(673) 00:11:07.259 fused_ordering(674) 00:11:07.259 fused_ordering(675) 00:11:07.259 fused_ordering(676) 00:11:07.259 fused_ordering(677) 00:11:07.259 fused_ordering(678) 00:11:07.259 fused_ordering(679) 00:11:07.259 fused_ordering(680) 00:11:07.259 fused_ordering(681) 00:11:07.259 fused_ordering(682) 00:11:07.259 fused_ordering(683) 00:11:07.259 fused_ordering(684) 00:11:07.259 fused_ordering(685) 00:11:07.259 fused_ordering(686) 00:11:07.259 fused_ordering(687) 00:11:07.259 fused_ordering(688) 00:11:07.259 fused_ordering(689) 00:11:07.259 fused_ordering(690) 00:11:07.259 fused_ordering(691) 00:11:07.259 fused_ordering(692) 00:11:07.259 fused_ordering(693) 00:11:07.259 fused_ordering(694) 00:11:07.259 fused_ordering(695) 00:11:07.259 fused_ordering(696) 00:11:07.259 fused_ordering(697) 00:11:07.259 fused_ordering(698) 00:11:07.259 fused_ordering(699) 00:11:07.259 fused_ordering(700) 00:11:07.259 fused_ordering(701) 00:11:07.259 fused_ordering(702) 00:11:07.259 fused_ordering(703) 00:11:07.259 fused_ordering(704) 00:11:07.259 fused_ordering(705) 00:11:07.259 fused_ordering(706) 00:11:07.259 fused_ordering(707) 00:11:07.259 fused_ordering(708) 00:11:07.259 fused_ordering(709) 00:11:07.259 fused_ordering(710) 00:11:07.259 fused_ordering(711) 00:11:07.259 fused_ordering(712) 00:11:07.259 fused_ordering(713) 00:11:07.259 fused_ordering(714) 00:11:07.259 fused_ordering(715) 00:11:07.259 fused_ordering(716) 00:11:07.259 fused_ordering(717) 00:11:07.259 fused_ordering(718) 00:11:07.259 fused_ordering(719) 00:11:07.259 fused_ordering(720) 00:11:07.259 fused_ordering(721) 00:11:07.259 fused_ordering(722) 00:11:07.259 fused_ordering(723) 00:11:07.259 fused_ordering(724) 00:11:07.259 fused_ordering(725) 00:11:07.259 fused_ordering(726) 00:11:07.259 fused_ordering(727) 00:11:07.259 fused_ordering(728) 00:11:07.259 fused_ordering(729) 00:11:07.259 fused_ordering(730) 00:11:07.259 fused_ordering(731) 00:11:07.259 fused_ordering(732) 00:11:07.259 fused_ordering(733) 00:11:07.259 fused_ordering(734) 00:11:07.259 fused_ordering(735) 00:11:07.259 fused_ordering(736) 00:11:07.259 fused_ordering(737) 00:11:07.259 fused_ordering(738) 00:11:07.259 fused_ordering(739) 00:11:07.259 fused_ordering(740) 00:11:07.259 fused_ordering(741) 00:11:07.259 
fused_ordering(742) 00:11:07.259 fused_ordering(743) 00:11:07.259 fused_ordering(744) 00:11:07.259 fused_ordering(745) 00:11:07.259 fused_ordering(746) 00:11:07.259 fused_ordering(747) 00:11:07.259 fused_ordering(748) 00:11:07.259 fused_ordering(749) 00:11:07.259 fused_ordering(750) 00:11:07.259 fused_ordering(751) 00:11:07.259 fused_ordering(752) 00:11:07.259 fused_ordering(753) 00:11:07.259 fused_ordering(754) 00:11:07.259 fused_ordering(755) 00:11:07.259 fused_ordering(756) 00:11:07.259 fused_ordering(757) 00:11:07.259 fused_ordering(758) 00:11:07.259 fused_ordering(759) 00:11:07.259 fused_ordering(760) 00:11:07.259 fused_ordering(761) 00:11:07.259 fused_ordering(762) 00:11:07.259 fused_ordering(763) 00:11:07.259 fused_ordering(764) 00:11:07.259 fused_ordering(765) 00:11:07.259 fused_ordering(766) 00:11:07.259 fused_ordering(767) 00:11:07.259 fused_ordering(768) 00:11:07.259 fused_ordering(769) 00:11:07.259 fused_ordering(770) 00:11:07.259 fused_ordering(771) 00:11:07.259 fused_ordering(772) 00:11:07.259 fused_ordering(773) 00:11:07.259 fused_ordering(774) 00:11:07.259 fused_ordering(775) 00:11:07.259 fused_ordering(776) 00:11:07.259 fused_ordering(777) 00:11:07.259 fused_ordering(778) 00:11:07.259 fused_ordering(779) 00:11:07.259 fused_ordering(780) 00:11:07.259 fused_ordering(781) 00:11:07.259 fused_ordering(782) 00:11:07.259 fused_ordering(783) 00:11:07.259 fused_ordering(784) 00:11:07.259 fused_ordering(785) 00:11:07.259 fused_ordering(786) 00:11:07.259 fused_ordering(787) 00:11:07.259 fused_ordering(788) 00:11:07.259 fused_ordering(789) 00:11:07.259 fused_ordering(790) 00:11:07.259 fused_ordering(791) 00:11:07.259 fused_ordering(792) 00:11:07.259 fused_ordering(793) 00:11:07.259 fused_ordering(794) 00:11:07.259 fused_ordering(795) 00:11:07.259 fused_ordering(796) 00:11:07.259 fused_ordering(797) 00:11:07.259 fused_ordering(798) 00:11:07.259 fused_ordering(799) 00:11:07.259 fused_ordering(800) 00:11:07.259 fused_ordering(801) 00:11:07.259 fused_ordering(802) 00:11:07.259 fused_ordering(803) 00:11:07.259 fused_ordering(804) 00:11:07.259 fused_ordering(805) 00:11:07.259 fused_ordering(806) 00:11:07.259 fused_ordering(807) 00:11:07.259 fused_ordering(808) 00:11:07.259 fused_ordering(809) 00:11:07.259 fused_ordering(810) 00:11:07.259 fused_ordering(811) 00:11:07.259 fused_ordering(812) 00:11:07.259 fused_ordering(813) 00:11:07.259 fused_ordering(814) 00:11:07.259 fused_ordering(815) 00:11:07.259 fused_ordering(816) 00:11:07.259 fused_ordering(817) 00:11:07.259 fused_ordering(818) 00:11:07.259 fused_ordering(819) 00:11:07.259 fused_ordering(820) 00:11:07.830 fused_ordering(821) 00:11:07.830 fused_ordering(822) 00:11:07.830 fused_ordering(823) 00:11:07.830 fused_ordering(824) 00:11:07.830 fused_ordering(825) 00:11:07.830 fused_ordering(826) 00:11:07.830 fused_ordering(827) 00:11:07.830 fused_ordering(828) 00:11:07.830 fused_ordering(829) 00:11:07.830 fused_ordering(830) 00:11:07.830 fused_ordering(831) 00:11:07.830 fused_ordering(832) 00:11:07.830 fused_ordering(833) 00:11:07.830 fused_ordering(834) 00:11:07.830 fused_ordering(835) 00:11:07.830 fused_ordering(836) 00:11:07.830 fused_ordering(837) 00:11:07.830 fused_ordering(838) 00:11:07.830 fused_ordering(839) 00:11:07.830 fused_ordering(840) 00:11:07.830 fused_ordering(841) 00:11:07.830 fused_ordering(842) 00:11:07.830 fused_ordering(843) 00:11:07.830 fused_ordering(844) 00:11:07.830 fused_ordering(845) 00:11:07.830 fused_ordering(846) 00:11:07.830 fused_ordering(847) 00:11:07.830 fused_ordering(848) 00:11:07.830 fused_ordering(849) 
00:11:07.830 fused_ordering(850) 00:11:07.830 fused_ordering(851) 00:11:07.830 fused_ordering(852) 00:11:07.830 fused_ordering(853) 00:11:07.830 fused_ordering(854) 00:11:07.830 fused_ordering(855) 00:11:07.830 fused_ordering(856) 00:11:07.830 fused_ordering(857) 00:11:07.830 fused_ordering(858) 00:11:07.830 fused_ordering(859) 00:11:07.830 fused_ordering(860) 00:11:07.830 fused_ordering(861) 00:11:07.830 fused_ordering(862) 00:11:07.830 fused_ordering(863) 00:11:07.830 fused_ordering(864) 00:11:07.830 fused_ordering(865) 00:11:07.830 fused_ordering(866) 00:11:07.830 fused_ordering(867) 00:11:07.830 fused_ordering(868) 00:11:07.830 fused_ordering(869) 00:11:07.830 fused_ordering(870) 00:11:07.830 fused_ordering(871) 00:11:07.830 fused_ordering(872) 00:11:07.830 fused_ordering(873) 00:11:07.830 fused_ordering(874) 00:11:07.830 fused_ordering(875) 00:11:07.830 fused_ordering(876) 00:11:07.830 fused_ordering(877) 00:11:07.830 fused_ordering(878) 00:11:07.830 fused_ordering(879) 00:11:07.830 fused_ordering(880) 00:11:07.830 fused_ordering(881) 00:11:07.830 fused_ordering(882) 00:11:07.830 fused_ordering(883) 00:11:07.830 fused_ordering(884) 00:11:07.830 fused_ordering(885) 00:11:07.830 fused_ordering(886) 00:11:07.830 fused_ordering(887) 00:11:07.830 fused_ordering(888) 00:11:07.830 fused_ordering(889) 00:11:07.830 fused_ordering(890) 00:11:07.830 fused_ordering(891) 00:11:07.830 fused_ordering(892) 00:11:07.830 fused_ordering(893) 00:11:07.830 fused_ordering(894) 00:11:07.830 fused_ordering(895) 00:11:07.830 fused_ordering(896) 00:11:07.830 fused_ordering(897) 00:11:07.830 fused_ordering(898) 00:11:07.830 fused_ordering(899) 00:11:07.830 fused_ordering(900) 00:11:07.830 fused_ordering(901) 00:11:07.830 fused_ordering(902) 00:11:07.830 fused_ordering(903) 00:11:07.830 fused_ordering(904) 00:11:07.830 fused_ordering(905) 00:11:07.830 fused_ordering(906) 00:11:07.830 fused_ordering(907) 00:11:07.830 fused_ordering(908) 00:11:07.830 fused_ordering(909) 00:11:07.830 fused_ordering(910) 00:11:07.830 fused_ordering(911) 00:11:07.830 fused_ordering(912) 00:11:07.830 fused_ordering(913) 00:11:07.830 fused_ordering(914) 00:11:07.831 fused_ordering(915) 00:11:07.831 fused_ordering(916) 00:11:07.831 fused_ordering(917) 00:11:07.831 fused_ordering(918) 00:11:07.831 fused_ordering(919) 00:11:07.831 fused_ordering(920) 00:11:07.831 fused_ordering(921) 00:11:07.831 fused_ordering(922) 00:11:07.831 fused_ordering(923) 00:11:07.831 fused_ordering(924) 00:11:07.831 fused_ordering(925) 00:11:07.831 fused_ordering(926) 00:11:07.831 fused_ordering(927) 00:11:07.831 fused_ordering(928) 00:11:07.831 fused_ordering(929) 00:11:07.831 fused_ordering(930) 00:11:07.831 fused_ordering(931) 00:11:07.831 fused_ordering(932) 00:11:07.831 fused_ordering(933) 00:11:07.831 fused_ordering(934) 00:11:07.831 fused_ordering(935) 00:11:07.831 fused_ordering(936) 00:11:07.831 fused_ordering(937) 00:11:07.831 fused_ordering(938) 00:11:07.831 fused_ordering(939) 00:11:07.831 fused_ordering(940) 00:11:07.831 fused_ordering(941) 00:11:07.831 fused_ordering(942) 00:11:07.831 fused_ordering(943) 00:11:07.831 fused_ordering(944) 00:11:07.831 fused_ordering(945) 00:11:07.831 fused_ordering(946) 00:11:07.831 fused_ordering(947) 00:11:07.831 fused_ordering(948) 00:11:07.831 fused_ordering(949) 00:11:07.831 fused_ordering(950) 00:11:07.831 fused_ordering(951) 00:11:07.831 fused_ordering(952) 00:11:07.831 fused_ordering(953) 00:11:07.831 fused_ordering(954) 00:11:07.831 fused_ordering(955) 00:11:07.831 fused_ordering(956) 00:11:07.831 
fused_ordering(957) 00:11:07.831 fused_ordering(958) 00:11:07.831 fused_ordering(959) 00:11:07.831 fused_ordering(960) 00:11:07.831 fused_ordering(961) 00:11:07.831 fused_ordering(962) 00:11:07.831 fused_ordering(963) 00:11:07.831 fused_ordering(964) 00:11:07.831 fused_ordering(965) 00:11:07.831 fused_ordering(966) 00:11:07.831 fused_ordering(967) 00:11:07.831 fused_ordering(968) 00:11:07.831 fused_ordering(969) 00:11:07.831 fused_ordering(970) 00:11:07.831 fused_ordering(971) 00:11:07.831 fused_ordering(972) 00:11:07.831 fused_ordering(973) 00:11:07.831 fused_ordering(974) 00:11:07.831 fused_ordering(975) 00:11:07.831 fused_ordering(976) 00:11:07.831 fused_ordering(977) 00:11:07.831 fused_ordering(978) 00:11:07.831 fused_ordering(979) 00:11:07.831 fused_ordering(980) 00:11:07.831 fused_ordering(981) 00:11:07.831 fused_ordering(982) 00:11:07.831 fused_ordering(983) 00:11:07.831 fused_ordering(984) 00:11:07.831 fused_ordering(985) 00:11:07.831 fused_ordering(986) 00:11:07.831 fused_ordering(987) 00:11:07.831 fused_ordering(988) 00:11:07.831 fused_ordering(989) 00:11:07.831 fused_ordering(990) 00:11:07.831 fused_ordering(991) 00:11:07.831 fused_ordering(992) 00:11:07.831 fused_ordering(993) 00:11:07.831 fused_ordering(994) 00:11:07.831 fused_ordering(995) 00:11:07.831 fused_ordering(996) 00:11:07.831 fused_ordering(997) 00:11:07.831 fused_ordering(998) 00:11:07.831 fused_ordering(999) 00:11:07.831 fused_ordering(1000) 00:11:07.831 fused_ordering(1001) 00:11:07.831 fused_ordering(1002) 00:11:07.831 fused_ordering(1003) 00:11:07.831 fused_ordering(1004) 00:11:07.831 fused_ordering(1005) 00:11:07.831 fused_ordering(1006) 00:11:07.831 fused_ordering(1007) 00:11:07.831 fused_ordering(1008) 00:11:07.831 fused_ordering(1009) 00:11:07.831 fused_ordering(1010) 00:11:07.831 fused_ordering(1011) 00:11:07.831 fused_ordering(1012) 00:11:07.831 fused_ordering(1013) 00:11:07.831 fused_ordering(1014) 00:11:07.831 fused_ordering(1015) 00:11:07.831 fused_ordering(1016) 00:11:07.831 fused_ordering(1017) 00:11:07.831 fused_ordering(1018) 00:11:07.831 fused_ordering(1019) 00:11:07.831 fused_ordering(1020) 00:11:07.831 fused_ordering(1021) 00:11:07.831 fused_ordering(1022) 00:11:07.831 fused_ordering(1023) 00:11:07.831 12:07:08 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:07.831 12:07:08 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:07.831 12:07:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:07.831 12:07:08 -- nvmf/common.sh@117 -- # sync 00:11:07.831 12:07:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:07.831 12:07:08 -- nvmf/common.sh@120 -- # set +e 00:11:07.831 12:07:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:07.831 12:07:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:07.831 rmmod nvme_tcp 00:11:07.831 rmmod nvme_fabrics 00:11:07.831 rmmod nvme_keyring 00:11:07.831 12:07:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:07.831 12:07:08 -- nvmf/common.sh@124 -- # set -e 00:11:07.831 12:07:08 -- nvmf/common.sh@125 -- # return 0 00:11:07.831 12:07:08 -- nvmf/common.sh@478 -- # '[' -n 3300412 ']' 00:11:07.831 12:07:08 -- nvmf/common.sh@479 -- # killprocess 3300412 00:11:07.831 12:07:08 -- common/autotest_common.sh@936 -- # '[' -z 3300412 ']' 00:11:07.831 12:07:08 -- common/autotest_common.sh@940 -- # kill -0 3300412 00:11:07.831 12:07:08 -- common/autotest_common.sh@941 -- # uname 00:11:07.831 12:07:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:07.831 12:07:08 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 3300412 00:11:07.831 12:07:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:07.831 12:07:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:07.831 12:07:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3300412' 00:11:07.831 killing process with pid 3300412 00:11:07.831 12:07:09 -- common/autotest_common.sh@955 -- # kill 3300412 00:11:07.831 12:07:09 -- common/autotest_common.sh@960 -- # wait 3300412 00:11:08.093 12:07:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:08.093 12:07:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:08.093 12:07:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:08.093 12:07:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.093 12:07:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.093 12:07:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.093 12:07:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.093 12:07:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.004 12:07:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:10.265 00:11:10.265 real 0m12.960s 00:11:10.265 user 0m6.908s 00:11:10.265 sys 0m6.683s 00:11:10.265 12:07:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:10.265 12:07:11 -- common/autotest_common.sh@10 -- # set +x 00:11:10.265 ************************************ 00:11:10.265 END TEST nvmf_fused_ordering 00:11:10.265 ************************************ 00:11:10.265 12:07:11 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:10.265 12:07:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:10.265 12:07:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.265 12:07:11 -- common/autotest_common.sh@10 -- # set +x 00:11:10.265 ************************************ 00:11:10.265 START TEST nvmf_delete_subsystem 00:11:10.265 ************************************ 00:11:10.265 12:07:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:10.527 * Looking for test storage... 
00:11:10.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.527 12:07:11 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.527 12:07:11 -- nvmf/common.sh@7 -- # uname -s 00:11:10.527 12:07:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.527 12:07:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.527 12:07:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.527 12:07:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.527 12:07:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.527 12:07:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.527 12:07:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.527 12:07:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.527 12:07:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.527 12:07:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.527 12:07:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:10.527 12:07:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:10.527 12:07:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.527 12:07:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.527 12:07:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.527 12:07:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.527 12:07:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.527 12:07:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.527 12:07:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.527 12:07:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.527 12:07:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.527 12:07:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.527 12:07:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.527 12:07:11 -- paths/export.sh@5 -- # export PATH 00:11:10.527 12:07:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.527 12:07:11 -- nvmf/common.sh@47 -- # : 0 00:11:10.527 12:07:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.527 12:07:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.527 12:07:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.527 12:07:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.527 12:07:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.527 12:07:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.527 12:07:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.527 12:07:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.527 12:07:11 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:10.527 12:07:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:10.527 12:07:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.527 12:07:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:10.527 12:07:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:10.527 12:07:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:10.527 12:07:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.527 12:07:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.527 12:07:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.527 12:07:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:10.527 12:07:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:10.527 12:07:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:10.527 12:07:11 -- common/autotest_common.sh@10 -- # set +x 00:11:18.673 12:07:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:18.673 12:07:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:18.673 12:07:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:18.673 12:07:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:18.673 12:07:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:18.673 12:07:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:18.673 12:07:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:18.673 12:07:18 -- nvmf/common.sh@295 -- # net_devs=() 00:11:18.673 12:07:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:18.673 12:07:18 -- nvmf/common.sh@296 -- # e810=() 00:11:18.673 12:07:18 -- nvmf/common.sh@296 -- # local -ga e810 00:11:18.673 12:07:18 -- nvmf/common.sh@297 -- # x722=() 
00:11:18.673 12:07:18 -- nvmf/common.sh@297 -- # local -ga x722 00:11:18.673 12:07:18 -- nvmf/common.sh@298 -- # mlx=() 00:11:18.673 12:07:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:18.673 12:07:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.673 12:07:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.673 12:07:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.673 12:07:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.673 12:07:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.673 12:07:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.673 12:07:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.673 12:07:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.673 12:07:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.673 12:07:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.673 12:07:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.673 12:07:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:18.673 12:07:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:18.673 12:07:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:18.673 12:07:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.673 12:07:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:18.673 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:18.673 12:07:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.673 12:07:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:18.673 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:18.673 12:07:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:18.673 12:07:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.673 12:07:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.673 12:07:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:18.673 12:07:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.673 12:07:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:18.673 Found net devices under 0000:31:00.0: cvl_0_0 00:11:18.673 12:07:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
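The device scan above reduces to one sysfs lookup per matched PCI function: the netdev name of a NIC is the single entry under its sysfs net/ directory. A minimal equivalent, using the two E810 ports the trace reports:

  ls /sys/bus/pci/devices/0000:31:00.0/net     # prints: cvl_0_0
  ls /sys/bus/pci/devices/0000:31:00.1/net     # prints: cvl_0_1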
00:11:18.673 12:07:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.673 12:07:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.673 12:07:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:18.673 12:07:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.673 12:07:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:18.673 Found net devices under 0000:31:00.1: cvl_0_1 00:11:18.673 12:07:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.673 12:07:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:18.673 12:07:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:18.673 12:07:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:18.673 12:07:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:18.673 12:07:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.673 12:07:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.673 12:07:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.673 12:07:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:18.673 12:07:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.673 12:07:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.673 12:07:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:18.673 12:07:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.673 12:07:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.673 12:07:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:18.673 12:07:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:18.673 12:07:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.673 12:07:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.673 12:07:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.673 12:07:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.673 12:07:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:18.673 12:07:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.673 12:07:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.673 12:07:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.673 12:07:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:18.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:11:18.674 00:11:18.674 --- 10.0.0.2 ping statistics --- 00:11:18.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.674 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:11:18.674 12:07:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:18.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:11:18.674 00:11:18.674 --- 10.0.0.1 ping statistics --- 00:11:18.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.674 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:11:18.674 12:07:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.674 12:07:18 -- nvmf/common.sh@411 -- # return 0 00:11:18.674 12:07:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:18.674 12:07:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.674 12:07:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:18.674 12:07:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:18.674 12:07:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.674 12:07:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:18.674 12:07:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:18.674 12:07:18 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:18.674 12:07:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:18.674 12:07:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:18.674 12:07:18 -- common/autotest_common.sh@10 -- # set +x 00:11:18.674 12:07:18 -- nvmf/common.sh@470 -- # nvmfpid=3305379 00:11:18.674 12:07:18 -- nvmf/common.sh@471 -- # waitforlisten 3305379 00:11:18.674 12:07:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:18.674 12:07:18 -- common/autotest_common.sh@817 -- # '[' -z 3305379 ']' 00:11:18.674 12:07:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.674 12:07:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:18.674 12:07:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.674 12:07:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:18.674 12:07:18 -- common/autotest_common.sh@10 -- # set +x 00:11:18.674 [2024-04-26 12:07:18.934260] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:11:18.674 [2024-04-26 12:07:18.934322] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.674 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.674 [2024-04-26 12:07:19.005984] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:18.674 [2024-04-26 12:07:19.078200] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.674 [2024-04-26 12:07:19.078239] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.674 [2024-04-26 12:07:19.078247] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.674 [2024-04-26 12:07:19.078254] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.674 [2024-04-26 12:07:19.078259] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
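For anyone reproducing the phy setup by hand, the namespace plumbing traced above amounts to the following commands, copied from the trace (interface names are the E810 ports found earlier; only the ordering and comments are added here):

  ip netns add cvl_0_0_ns_spdk                                          # target side lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator/host side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                    # host -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> host sanity check

The target itself is then launched inside the namespace with ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x3, exactly as the nvmfappstart trace shows.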
00:11:18.674 [2024-04-26 12:07:19.078409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.674 [2024-04-26 12:07:19.078411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.674 12:07:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:18.674 12:07:19 -- common/autotest_common.sh@850 -- # return 0 00:11:18.674 12:07:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:18.674 12:07:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:18.674 12:07:19 -- common/autotest_common.sh@10 -- # set +x 00:11:18.674 12:07:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.674 12:07:19 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:18.674 12:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.674 12:07:19 -- common/autotest_common.sh@10 -- # set +x 00:11:18.674 [2024-04-26 12:07:19.738022] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.674 12:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.674 12:07:19 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:18.674 12:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.674 12:07:19 -- common/autotest_common.sh@10 -- # set +x 00:11:18.674 12:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.674 12:07:19 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.674 12:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.674 12:07:19 -- common/autotest_common.sh@10 -- # set +x 00:11:18.674 [2024-04-26 12:07:19.754181] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.674 12:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.674 12:07:19 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:18.674 12:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.674 12:07:19 -- common/autotest_common.sh@10 -- # set +x 00:11:18.674 NULL1 00:11:18.674 12:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.674 12:07:19 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:18.674 12:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.674 12:07:19 -- common/autotest_common.sh@10 -- # set +x 00:11:18.674 Delay0 00:11:18.674 12:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.674 12:07:19 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.674 12:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.674 12:07:19 -- common/autotest_common.sh@10 -- # set +x 00:11:18.674 12:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.674 12:07:19 -- target/delete_subsystem.sh@28 -- # perf_pid=3305707 00:11:18.674 12:07:19 -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:18.674 12:07:19 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:18.674 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.674 [2024-04-26 12:07:19.828780] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:20.590 12:07:21 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.590 12:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.591 12:07:21 -- common/autotest_common.sh@10 -- # set +x 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 starting I/O failed: -6 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 starting I/O failed: -6 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 starting I/O failed: -6 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 starting I/O failed: -6 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 starting I/O failed: -6 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 starting I/O failed: -6 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 starting I/O failed: -6 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 starting I/O failed: -6 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 starting I/O failed: -6 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 starting I/O failed: -6 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 [2024-04-26 12:07:21.952179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ab40 is same with the state(5) to be set 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with 
error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Write completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.852 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 [2024-04-26 12:07:21.953442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53a780 is same with the state(5) to be set 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 starting I/O failed: -6 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 starting I/O failed: -6 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 starting I/O failed: -6 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 starting I/O failed: -6 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write 
completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 starting I/O failed: -6 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 starting I/O failed: -6 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 starting I/O failed: -6 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 starting I/O failed: -6 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 starting I/O failed: -6 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 [2024-04-26 12:07:21.957558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff010000c00 is same with the state(5) to be set 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 
Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Write completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:20.853 Read completed with error (sct=0, sc=8) 00:11:21.797 [2024-04-26 12:07:22.927029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x550c40 is same with the state(5) to be set 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 [2024-04-26 12:07:22.955869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53a910 is same with the state(5) to be set 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 [2024-04-26 12:07:22.956027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53acd0 is same with the state(5) to be set 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Read 
completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 [2024-04-26 12:07:22.958417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff01000bf90 is same with the state(5) to be set 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Write completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 Read completed with error (sct=0, sc=8) 00:11:21.797 [2024-04-26 12:07:22.958528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff01000c690 is same with the state(5) to be set 00:11:21.797 [2024-04-26 12:07:22.959148] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x550c40 (9): Bad file descriptor 00:11:21.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:21.797 12:07:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.797 12:07:22 -- target/delete_subsystem.sh@34 -- # delay=0 00:11:21.797 12:07:22 -- target/delete_subsystem.sh@35 -- # kill -0 3305707 00:11:21.797 12:07:22 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:21.797 Initializing NVMe Controllers 00:11:21.797 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:21.797 Controller IO queue size 128, less than required. 00:11:21.797 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:21.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:21.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:21.797 Initialization complete. Launching workers. 
00:11:21.797 ======================================================== 00:11:21.797 Latency(us) 00:11:21.797 Device Information : IOPS MiB/s Average min max 00:11:21.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 159.45 0.08 920597.42 371.96 1042612.10 00:11:21.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.98 0.07 974921.40 262.95 2001249.61 00:11:21.797 ======================================================== 00:11:21.797 Total : 310.43 0.15 947018.24 262.95 2001249.61 00:11:21.797 00:11:22.368 12:07:23 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:22.368 12:07:23 -- target/delete_subsystem.sh@35 -- # kill -0 3305707 00:11:22.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3305707) - No such process 00:11:22.368 12:07:23 -- target/delete_subsystem.sh@45 -- # NOT wait 3305707 00:11:22.368 12:07:23 -- common/autotest_common.sh@638 -- # local es=0 00:11:22.368 12:07:23 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 3305707 00:11:22.368 12:07:23 -- common/autotest_common.sh@626 -- # local arg=wait 00:11:22.368 12:07:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:22.368 12:07:23 -- common/autotest_common.sh@630 -- # type -t wait 00:11:22.368 12:07:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:22.368 12:07:23 -- common/autotest_common.sh@641 -- # wait 3305707 00:11:22.368 12:07:23 -- common/autotest_common.sh@641 -- # es=1 00:11:22.368 12:07:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:22.368 12:07:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:22.368 12:07:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:22.368 12:07:23 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:22.368 12:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:22.368 12:07:23 -- common/autotest_common.sh@10 -- # set +x 00:11:22.368 12:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:22.368 12:07:23 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.368 12:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:22.368 12:07:23 -- common/autotest_common.sh@10 -- # set +x 00:11:22.368 [2024-04-26 12:07:23.489383] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.368 12:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:22.368 12:07:23 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.368 12:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:22.368 12:07:23 -- common/autotest_common.sh@10 -- # set +x 00:11:22.368 12:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:22.368 12:07:23 -- target/delete_subsystem.sh@54 -- # perf_pid=3306396 00:11:22.368 12:07:23 -- target/delete_subsystem.sh@56 -- # delay=0 00:11:22.368 12:07:23 -- target/delete_subsystem.sh@57 -- # kill -0 3306396 00:11:22.368 12:07:23 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:22.368 12:07:23 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:22.368 EAL: No free 2048 kB hugepages 
reported on node 1 00:11:22.368 [2024-04-26 12:07:23.559392] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:22.938 12:07:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:22.938 12:07:24 -- target/delete_subsystem.sh@57 -- # kill -0 3306396 00:11:22.938 12:07:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.575 12:07:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.575 12:07:24 -- target/delete_subsystem.sh@57 -- # kill -0 3306396 00:11:23.575 12:07:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.837 12:07:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.837 12:07:25 -- target/delete_subsystem.sh@57 -- # kill -0 3306396 00:11:23.837 12:07:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:24.408 12:07:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.408 12:07:25 -- target/delete_subsystem.sh@57 -- # kill -0 3306396 00:11:24.408 12:07:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:24.981 12:07:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.981 12:07:26 -- target/delete_subsystem.sh@57 -- # kill -0 3306396 00:11:24.981 12:07:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.552 12:07:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.552 12:07:26 -- target/delete_subsystem.sh@57 -- # kill -0 3306396 00:11:25.552 12:07:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.552 Initializing NVMe Controllers 00:11:25.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:25.552 Controller IO queue size 128, less than required. 00:11:25.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:25.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:25.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:25.552 Initialization complete. Launching workers. 
00:11:25.552 ======================================================== 00:11:25.552 Latency(us) 00:11:25.552 Device Information : IOPS MiB/s Average min max 00:11:25.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002394.80 1000169.99 1041832.70 00:11:25.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003348.77 1000342.95 1010058.70 00:11:25.552 ======================================================== 00:11:25.552 Total : 256.00 0.12 1002871.79 1000169.99 1041832.70 00:11:25.552 00:11:26.122 12:07:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:26.122 12:07:27 -- target/delete_subsystem.sh@57 -- # kill -0 3306396 00:11:26.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3306396) - No such process 00:11:26.122 12:07:27 -- target/delete_subsystem.sh@67 -- # wait 3306396 00:11:26.122 12:07:27 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:26.122 12:07:27 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:26.122 12:07:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:26.122 12:07:27 -- nvmf/common.sh@117 -- # sync 00:11:26.122 12:07:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.122 12:07:27 -- nvmf/common.sh@120 -- # set +e 00:11:26.122 12:07:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.122 12:07:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.122 rmmod nvme_tcp 00:11:26.122 rmmod nvme_fabrics 00:11:26.122 rmmod nvme_keyring 00:11:26.122 12:07:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.122 12:07:27 -- nvmf/common.sh@124 -- # set -e 00:11:26.122 12:07:27 -- nvmf/common.sh@125 -- # return 0 00:11:26.122 12:07:27 -- nvmf/common.sh@478 -- # '[' -n 3305379 ']' 00:11:26.122 12:07:27 -- nvmf/common.sh@479 -- # killprocess 3305379 00:11:26.122 12:07:27 -- common/autotest_common.sh@936 -- # '[' -z 3305379 ']' 00:11:26.122 12:07:27 -- common/autotest_common.sh@940 -- # kill -0 3305379 00:11:26.122 12:07:27 -- common/autotest_common.sh@941 -- # uname 00:11:26.122 12:07:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:26.122 12:07:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3305379 00:11:26.122 12:07:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:26.122 12:07:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:26.122 12:07:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3305379' 00:11:26.122 killing process with pid 3305379 00:11:26.122 12:07:27 -- common/autotest_common.sh@955 -- # kill 3305379 00:11:26.122 12:07:27 -- common/autotest_common.sh@960 -- # wait 3305379 00:11:26.122 12:07:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:26.122 12:07:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:26.122 12:07:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:26.122 12:07:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.122 12:07:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:26.122 12:07:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.122 12:07:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.122 12:07:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.663 12:07:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:28.663 00:11:28.663 real 0m17.949s 00:11:28.663 user 0m30.593s 00:11:28.663 sys 0m6.289s 
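For reference, the sequence the delete_subsystem test drives above can be replayed by hand against a running nvmf_tgt. The sketch below is only illustrative: it assumes an SPDK checkout as the working directory and the default RPC socket, whereas the test itself issues the same calls through its rpc_cmd wrapper and tracks the perf process with the kill -0 loop shown in the log.

# Build a subsystem whose only namespace is a delayed null bdev, so queued I/O
# is still outstanding when the subsystem is torn down.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512
scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Start traffic, then delete the subsystem underneath it.
build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
wait "$perf_pid" || true    # perf exits non-zero once its queued commands are failed back

The long runs of "completed with error (sct=0, sc=8)" above are the expected outcome of this flow: the delay bdev keeps the queue-depth-128 workload outstanding, and deleting the subsystem aborts those commands back to the initiator.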
00:11:28.663 12:07:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:28.663 12:07:29 -- common/autotest_common.sh@10 -- # set +x 00:11:28.663 ************************************ 00:11:28.663 END TEST nvmf_delete_subsystem 00:11:28.663 ************************************ 00:11:28.663 12:07:29 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:28.663 12:07:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:28.663 12:07:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:28.663 12:07:29 -- common/autotest_common.sh@10 -- # set +x 00:11:28.663 ************************************ 00:11:28.663 START TEST nvmf_ns_masking 00:11:28.663 ************************************ 00:11:28.663 12:07:29 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:28.664 * Looking for test storage... 00:11:28.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.664 12:07:29 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.664 12:07:29 -- nvmf/common.sh@7 -- # uname -s 00:11:28.664 12:07:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.664 12:07:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.664 12:07:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.664 12:07:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.664 12:07:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.664 12:07:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.664 12:07:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.664 12:07:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.664 12:07:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.664 12:07:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.664 12:07:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:28.664 12:07:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:28.664 12:07:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.664 12:07:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.664 12:07:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.664 12:07:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.664 12:07:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.664 12:07:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.664 12:07:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.664 12:07:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.664 12:07:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.664 12:07:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.664 12:07:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.664 12:07:29 -- paths/export.sh@5 -- # export PATH 00:11:28.664 12:07:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.664 12:07:29 -- nvmf/common.sh@47 -- # : 0 00:11:28.664 12:07:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.664 12:07:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.664 12:07:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.664 12:07:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.664 12:07:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.664 12:07:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.664 12:07:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.664 12:07:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.664 12:07:29 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:28.664 12:07:29 -- target/ns_masking.sh@11 -- # loops=5 00:11:28.664 12:07:29 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:28.664 12:07:29 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:28.664 12:07:29 -- target/ns_masking.sh@15 -- # uuidgen 00:11:28.664 12:07:29 -- target/ns_masking.sh@15 -- # HOSTID=b6bbdd01-566b-4a8e-9780-b583091ba35b 00:11:28.664 12:07:29 -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:28.664 12:07:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:28.664 12:07:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.664 12:07:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:28.664 12:07:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:28.664 12:07:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:28.664 12:07:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.664 12:07:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.664 12:07:29 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:11:28.664 12:07:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:28.664 12:07:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:28.664 12:07:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.664 12:07:29 -- common/autotest_common.sh@10 -- # set +x 00:11:35.255 12:07:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:35.255 12:07:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.255 12:07:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.255 12:07:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.255 12:07:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.255 12:07:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.255 12:07:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.255 12:07:36 -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.255 12:07:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.255 12:07:36 -- nvmf/common.sh@296 -- # e810=() 00:11:35.255 12:07:36 -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.255 12:07:36 -- nvmf/common.sh@297 -- # x722=() 00:11:35.255 12:07:36 -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.255 12:07:36 -- nvmf/common.sh@298 -- # mlx=() 00:11:35.255 12:07:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.255 12:07:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.256 12:07:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.256 12:07:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.256 12:07:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.256 12:07:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.256 12:07:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.256 12:07:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.256 12:07:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.256 12:07:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.256 12:07:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.256 12:07:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.256 12:07:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.256 12:07:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:35.256 12:07:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.256 12:07:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.256 12:07:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:35.256 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:35.256 12:07:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.256 12:07:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:35.256 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:35.256 12:07:36 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.256 12:07:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.256 12:07:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.256 12:07:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:35.256 12:07:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.256 12:07:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:35.256 Found net devices under 0000:31:00.0: cvl_0_0 00:11:35.256 12:07:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.256 12:07:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.256 12:07:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.256 12:07:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:35.256 12:07:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.256 12:07:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:35.256 Found net devices under 0000:31:00.1: cvl_0_1 00:11:35.256 12:07:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.256 12:07:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:35.256 12:07:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:35.256 12:07:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:35.256 12:07:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:35.256 12:07:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.256 12:07:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.256 12:07:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.256 12:07:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:35.256 12:07:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.256 12:07:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.256 12:07:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:35.256 12:07:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.256 12:07:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.256 12:07:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:35.256 12:07:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:35.256 12:07:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.256 12:07:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.516 12:07:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.516 12:07:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.516 12:07:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.516 12:07:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.516 12:07:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.516 12:07:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.777 12:07:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:11:35.777 00:11:35.777 --- 10.0.0.2 ping statistics --- 00:11:35.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.777 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:11:35.777 12:07:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:11:35.777 00:11:35.777 --- 10.0.0.1 ping statistics --- 00:11:35.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.777 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:11:35.777 12:07:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.777 12:07:36 -- nvmf/common.sh@411 -- # return 0 00:11:35.777 12:07:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:35.777 12:07:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.777 12:07:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:35.777 12:07:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:35.777 12:07:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.777 12:07:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:35.777 12:07:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:35.777 12:07:36 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:35.777 12:07:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:35.777 12:07:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:35.777 12:07:36 -- common/autotest_common.sh@10 -- # set +x 00:11:35.777 12:07:36 -- nvmf/common.sh@470 -- # nvmfpid=3311322 00:11:35.777 12:07:36 -- nvmf/common.sh@471 -- # waitforlisten 3311322 00:11:35.777 12:07:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.777 12:07:36 -- common/autotest_common.sh@817 -- # '[' -z 3311322 ']' 00:11:35.777 12:07:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.777 12:07:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:35.777 12:07:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.777 12:07:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:35.777 12:07:36 -- common/autotest_common.sh@10 -- # set +x 00:11:35.777 [2024-04-26 12:07:36.856930] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:11:35.777 [2024-04-26 12:07:36.856995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.777 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.777 [2024-04-26 12:07:36.930097] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.038 [2024-04-26 12:07:37.004581] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:36.038 [2024-04-26 12:07:37.004623] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.038 [2024-04-26 12:07:37.004632] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.038 [2024-04-26 12:07:37.004640] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.038 [2024-04-26 12:07:37.004647] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.038 [2024-04-26 12:07:37.004860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.038 [2024-04-26 12:07:37.004967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.038 [2024-04-26 12:07:37.005274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.038 [2024-04-26 12:07:37.005275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.609 12:07:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:36.609 12:07:37 -- common/autotest_common.sh@850 -- # return 0 00:11:36.609 12:07:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:36.609 12:07:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:36.609 12:07:37 -- common/autotest_common.sh@10 -- # set +x 00:11:36.609 12:07:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.609 12:07:37 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:36.609 [2024-04-26 12:07:37.817798] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.869 12:07:37 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:36.869 12:07:37 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:36.869 12:07:37 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:36.869 Malloc1 00:11:36.869 12:07:38 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:37.130 Malloc2 00:11:37.130 12:07:38 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.392 12:07:38 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:37.392 12:07:38 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.653 [2024-04-26 12:07:38.666698] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.653 12:07:38 -- target/ns_masking.sh@61 -- # connect 00:11:37.653 12:07:38 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b6bbdd01-566b-4a8e-9780-b583091ba35b -a 10.0.0.2 -s 4420 -i 4 00:11:37.914 12:07:38 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.914 12:07:38 -- common/autotest_common.sh@1184 -- # local i=0 00:11:37.914 12:07:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.914 12:07:38 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
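The target-side bring-up for this ns_masking run mirrors the rpc.py calls visible above; a condensed, hand-runnable sketch follows. It assumes an SPDK checkout as the working directory and the nvmf_tgt started earlier in the cvl_0_0_ns_spdk namespace; the host ID passed to nvme connect is simply the uuidgen output the test stored in $HOSTID (b6bbdd01-566b-4a8e-9780-b583091ba35b in this run).

# Target side: two malloc bdevs, one subsystem, Malloc1 attached as namespace 1
# without any masking (auto-visible to every connecting host).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect with an explicit host NQN and host ID, then list namespaces.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I b6bbdd01-566b-4a8e-9780-b583091ba35b -i 4
nvme list-ns /dev/nvme0    # only namespace 0x1 is expected at this point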
00:11:37.914 12:07:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:39.828 12:07:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:39.828 12:07:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:39.828 12:07:40 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.828 12:07:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:39.828 12:07:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.828 12:07:40 -- common/autotest_common.sh@1194 -- # return 0 00:11:39.828 12:07:40 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:39.828 12:07:40 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:39.828 12:07:40 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:39.828 12:07:40 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:39.828 12:07:40 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:39.828 12:07:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:39.828 12:07:40 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:39.828 [ 0]:0x1 00:11:39.828 12:07:41 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:39.828 12:07:41 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:39.828 12:07:41 -- target/ns_masking.sh@40 -- # nguid=ab382a45bd45441cbbd41ca75c53f60d 00:11:39.828 12:07:41 -- target/ns_masking.sh@41 -- # [[ ab382a45bd45441cbbd41ca75c53f60d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.828 12:07:41 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:40.088 12:07:41 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:40.088 12:07:41 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.088 12:07:41 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:40.088 [ 0]:0x1 00:11:40.088 12:07:41 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:40.088 12:07:41 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.088 12:07:41 -- target/ns_masking.sh@40 -- # nguid=ab382a45bd45441cbbd41ca75c53f60d 00:11:40.088 12:07:41 -- target/ns_masking.sh@41 -- # [[ ab382a45bd45441cbbd41ca75c53f60d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.088 12:07:41 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:40.088 12:07:41 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.088 12:07:41 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:40.349 [ 1]:0x2 00:11:40.349 12:07:41 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:40.349 12:07:41 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.349 12:07:41 -- target/ns_masking.sh@40 -- # nguid=af156f3dd0914442af6fe8996e9b3483 00:11:40.349 12:07:41 -- target/ns_masking.sh@41 -- # [[ af156f3dd0914442af6fe8996e9b3483 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.349 12:07:41 -- target/ns_masking.sh@69 -- # disconnect 00:11:40.349 12:07:41 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.610 12:07:41 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.610 12:07:41 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:40.871 12:07:41 -- target/ns_masking.sh@77 -- # connect 1 00:11:40.871 12:07:41 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b6bbdd01-566b-4a8e-9780-b583091ba35b -a 10.0.0.2 -s 4420 -i 4 00:11:41.132 12:07:42 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:41.132 12:07:42 -- common/autotest_common.sh@1184 -- # local i=0 00:11:41.132 12:07:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.132 12:07:42 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:11:41.132 12:07:42 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:11:41.133 12:07:42 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:43.045 12:07:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:43.045 12:07:44 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:43.045 12:07:44 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:43.045 12:07:44 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:43.045 12:07:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.045 12:07:44 -- common/autotest_common.sh@1194 -- # return 0 00:11:43.045 12:07:44 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:43.045 12:07:44 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:43.045 12:07:44 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:43.045 12:07:44 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:43.045 12:07:44 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:43.045 12:07:44 -- common/autotest_common.sh@638 -- # local es=0 00:11:43.045 12:07:44 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:43.045 12:07:44 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:43.045 12:07:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.045 12:07:44 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:43.045 12:07:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.045 12:07:44 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:43.045 12:07:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.045 12:07:44 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:43.045 12:07:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:43.045 12:07:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.045 12:07:44 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:43.045 12:07:44 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.045 12:07:44 -- common/autotest_common.sh@641 -- # es=1 00:11:43.045 12:07:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:43.045 12:07:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:43.045 12:07:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:43.045 12:07:44 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:43.045 12:07:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.045 12:07:44 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:43.306 [ 0]:0x2 00:11:43.306 12:07:44 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:11:43.306 12:07:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.306 12:07:44 -- target/ns_masking.sh@40 -- # nguid=af156f3dd0914442af6fe8996e9b3483 00:11:43.306 12:07:44 -- target/ns_masking.sh@41 -- # [[ af156f3dd0914442af6fe8996e9b3483 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.306 12:07:44 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:43.306 12:07:44 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:43.306 12:07:44 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:43.306 12:07:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.306 [ 0]:0x1 00:11:43.306 12:07:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:43.306 12:07:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.566 12:07:44 -- target/ns_masking.sh@40 -- # nguid=ab382a45bd45441cbbd41ca75c53f60d 00:11:43.566 12:07:44 -- target/ns_masking.sh@41 -- # [[ ab382a45bd45441cbbd41ca75c53f60d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.566 12:07:44 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:43.566 12:07:44 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:43.566 12:07:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.566 [ 1]:0x2 00:11:43.566 12:07:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:43.566 12:07:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.566 12:07:44 -- target/ns_masking.sh@40 -- # nguid=af156f3dd0914442af6fe8996e9b3483 00:11:43.566 12:07:44 -- target/ns_masking.sh@41 -- # [[ af156f3dd0914442af6fe8996e9b3483 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.566 12:07:44 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:43.566 12:07:44 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:43.566 12:07:44 -- common/autotest_common.sh@638 -- # local es=0 00:11:43.566 12:07:44 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:43.566 12:07:44 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:43.566 12:07:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.566 12:07:44 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:43.566 12:07:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.566 12:07:44 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:43.566 12:07:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.566 12:07:44 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:43.827 12:07:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:43.827 12:07:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.827 12:07:44 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:43.827 12:07:44 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.827 12:07:44 -- common/autotest_common.sh@641 -- # es=1 00:11:43.827 12:07:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:43.827 12:07:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:43.828 12:07:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:43.828 12:07:44 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:43.828 12:07:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.828 12:07:44 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:43.828 [ 0]:0x2 00:11:43.828 12:07:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:43.828 12:07:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.828 12:07:44 -- target/ns_masking.sh@40 -- # nguid=af156f3dd0914442af6fe8996e9b3483 00:11:43.828 12:07:44 -- target/ns_masking.sh@41 -- # [[ af156f3dd0914442af6fe8996e9b3483 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.828 12:07:44 -- target/ns_masking.sh@91 -- # disconnect 00:11:43.828 12:07:44 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.828 12:07:44 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:44.087 12:07:45 -- target/ns_masking.sh@95 -- # connect 2 00:11:44.087 12:07:45 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b6bbdd01-566b-4a8e-9780-b583091ba35b -a 10.0.0.2 -s 4420 -i 4 00:11:44.087 12:07:45 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:44.087 12:07:45 -- common/autotest_common.sh@1184 -- # local i=0 00:11:44.087 12:07:45 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.087 12:07:45 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:11:44.087 12:07:45 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:11:44.087 12:07:45 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:46.631 12:07:47 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:46.631 12:07:47 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:46.631 12:07:47 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.631 12:07:47 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:11:46.631 12:07:47 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.631 12:07:47 -- common/autotest_common.sh@1194 -- # return 0 00:11:46.631 12:07:47 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:46.631 12:07:47 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:46.631 12:07:47 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:46.631 12:07:47 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:46.631 12:07:47 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:46.631 12:07:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.631 12:07:47 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:46.631 [ 0]:0x1 00:11:46.631 12:07:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.631 12:07:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.631 12:07:47 -- target/ns_masking.sh@40 -- # nguid=ab382a45bd45441cbbd41ca75c53f60d 00:11:46.631 12:07:47 -- target/ns_masking.sh@41 -- # [[ ab382a45bd45441cbbd41ca75c53f60d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.631 12:07:47 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:46.631 12:07:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.631 12:07:47 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:46.631 [ 1]:0x2 
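The masking steps themselves boil down to a handful of RPCs plus the NGUID probe used by ns_is_visible(); the sketch below restates them outside the test harness, using the same subsystem, bdev and host names as the run above (a minimal illustration, not the full ns_masking.sh flow).

# Re-attach Malloc1 as a hidden namespace, then toggle its visibility for one host NQN.
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # expose ns 1 to host1
scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hide it again

# Visibility check on the initiator: a masked namespace reports an all-zero NGUID,
# which is exactly the condition ns_is_visible() greps and compares in the log above.
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid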
00:11:46.631 12:07:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.631 12:07:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.631 12:07:47 -- target/ns_masking.sh@40 -- # nguid=af156f3dd0914442af6fe8996e9b3483 00:11:46.631 12:07:47 -- target/ns_masking.sh@41 -- # [[ af156f3dd0914442af6fe8996e9b3483 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.631 12:07:47 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:46.631 12:07:47 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:46.631 12:07:47 -- common/autotest_common.sh@638 -- # local es=0 00:11:46.631 12:07:47 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:46.631 12:07:47 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:46.631 12:07:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.631 12:07:47 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:46.631 12:07:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.631 12:07:47 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:46.631 12:07:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.631 12:07:47 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:46.631 12:07:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.631 12:07:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.631 12:07:47 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:46.631 12:07:47 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.631 12:07:47 -- common/autotest_common.sh@641 -- # es=1 00:11:46.631 12:07:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:46.631 12:07:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:46.631 12:07:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:46.631 12:07:47 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:46.631 12:07:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.631 12:07:47 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:46.631 [ 0]:0x2 00:11:46.631 12:07:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.631 12:07:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.892 12:07:47 -- target/ns_masking.sh@40 -- # nguid=af156f3dd0914442af6fe8996e9b3483 00:11:46.892 12:07:47 -- target/ns_masking.sh@41 -- # [[ af156f3dd0914442af6fe8996e9b3483 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.892 12:07:47 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:46.892 12:07:47 -- common/autotest_common.sh@638 -- # local es=0 00:11:46.892 12:07:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:46.892 12:07:47 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.892 12:07:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.892 12:07:47 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.892 12:07:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.892 12:07:47 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.892 12:07:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.892 12:07:47 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.892 12:07:47 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:46.892 12:07:47 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:46.892 [2024-04-26 12:07:48.027523] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:46.892 request: 00:11:46.892 { 00:11:46.892 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:46.892 "nsid": 2, 00:11:46.892 "host": "nqn.2016-06.io.spdk:host1", 00:11:46.892 "method": "nvmf_ns_remove_host", 00:11:46.892 "req_id": 1 00:11:46.892 } 00:11:46.892 Got JSON-RPC error response 00:11:46.892 response: 00:11:46.892 { 00:11:46.892 "code": -32602, 00:11:46.892 "message": "Invalid parameters" 00:11:46.892 } 00:11:46.892 12:07:48 -- common/autotest_common.sh@641 -- # es=1 00:11:46.892 12:07:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:46.892 12:07:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:46.892 12:07:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:46.892 12:07:48 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:46.892 12:07:48 -- common/autotest_common.sh@638 -- # local es=0 00:11:46.892 12:07:48 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:46.892 12:07:48 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:46.892 12:07:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.892 12:07:48 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:46.892 12:07:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.892 12:07:48 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:46.892 12:07:48 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.892 12:07:48 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:46.892 12:07:48 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.892 12:07:48 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.892 12:07:48 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:46.892 12:07:48 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.892 12:07:48 -- common/autotest_common.sh@641 -- # es=1 00:11:46.892 12:07:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:46.892 12:07:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:46.892 12:07:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:46.892 12:07:48 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:46.892 12:07:48 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.892 12:07:48 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:46.892 [ 0]:0x2 00:11:47.153 12:07:48 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:47.153 12:07:48 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.153 12:07:48 -- target/ns_masking.sh@40 -- # nguid=af156f3dd0914442af6fe8996e9b3483 00:11:47.153 12:07:48 -- target/ns_masking.sh@41 -- # [[ af156f3dd0914442af6fe8996e9b3483 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.153 12:07:48 -- target/ns_masking.sh@108 -- # disconnect 00:11:47.153 12:07:48 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.153 12:07:48 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.153 12:07:48 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:47.153 12:07:48 -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:47.153 12:07:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:47.153 12:07:48 -- nvmf/common.sh@117 -- # sync 00:11:47.153 12:07:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:47.153 12:07:48 -- nvmf/common.sh@120 -- # set +e 00:11:47.153 12:07:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:47.153 12:07:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:47.153 rmmod nvme_tcp 00:11:47.414 rmmod nvme_fabrics 00:11:47.414 rmmod nvme_keyring 00:11:47.414 12:07:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:47.414 12:07:48 -- nvmf/common.sh@124 -- # set -e 00:11:47.414 12:07:48 -- nvmf/common.sh@125 -- # return 0 00:11:47.414 12:07:48 -- nvmf/common.sh@478 -- # '[' -n 3311322 ']' 00:11:47.414 12:07:48 -- nvmf/common.sh@479 -- # killprocess 3311322 00:11:47.414 12:07:48 -- common/autotest_common.sh@936 -- # '[' -z 3311322 ']' 00:11:47.414 12:07:48 -- common/autotest_common.sh@940 -- # kill -0 3311322 00:11:47.414 12:07:48 -- common/autotest_common.sh@941 -- # uname 00:11:47.414 12:07:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:47.414 12:07:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3311322 00:11:47.414 12:07:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:47.414 12:07:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:47.414 12:07:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3311322' 00:11:47.414 killing process with pid 3311322 00:11:47.414 12:07:48 -- common/autotest_common.sh@955 -- # kill 3311322 00:11:47.414 12:07:48 -- common/autotest_common.sh@960 -- # wait 3311322 00:11:47.674 12:07:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:47.674 12:07:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:47.674 12:07:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:47.674 12:07:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:47.674 12:07:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:47.674 12:07:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.674 12:07:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.675 12:07:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.587 12:07:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:49.587 00:11:49.587 real 0m21.144s 00:11:49.587 user 0m50.841s 00:11:49.587 sys 0m6.917s 00:11:49.587 12:07:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:49.587 12:07:50 -- common/autotest_common.sh@10 -- # set +x 00:11:49.587 ************************************ 00:11:49.587 END TEST nvmf_ns_masking 00:11:49.587 
************************************ 00:11:49.587 12:07:50 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:49.587 12:07:50 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:49.587 12:07:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:49.587 12:07:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:49.587 12:07:50 -- common/autotest_common.sh@10 -- # set +x 00:11:49.849 ************************************ 00:11:49.849 START TEST nvmf_nvme_cli 00:11:49.849 ************************************ 00:11:49.849 12:07:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:49.849 * Looking for test storage... 00:11:49.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.849 12:07:51 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.849 12:07:51 -- nvmf/common.sh@7 -- # uname -s 00:11:49.849 12:07:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.849 12:07:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.849 12:07:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.849 12:07:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.849 12:07:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.849 12:07:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.849 12:07:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.849 12:07:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.849 12:07:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.849 12:07:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.849 12:07:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:49.849 12:07:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:49.849 12:07:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.849 12:07:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.849 12:07:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.849 12:07:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.849 12:07:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.849 12:07:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.849 12:07:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.849 12:07:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.849 12:07:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.849 12:07:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.849 12:07:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.849 12:07:51 -- paths/export.sh@5 -- # export PATH 00:11:49.849 12:07:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.849 12:07:51 -- nvmf/common.sh@47 -- # : 0 00:11:49.849 12:07:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.849 12:07:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.849 12:07:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.849 12:07:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.849 12:07:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.850 12:07:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.850 12:07:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.850 12:07:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.850 12:07:51 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.850 12:07:51 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.850 12:07:51 -- target/nvme_cli.sh@14 -- # devs=() 00:11:49.850 12:07:51 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:49.850 12:07:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:49.850 12:07:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.850 12:07:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:49.850 12:07:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:49.850 12:07:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:49.850 12:07:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.850 12:07:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.850 12:07:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.850 12:07:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:49.850 12:07:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:49.850 12:07:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:49.850 12:07:51 -- common/autotest_common.sh@10 -- # set +x 00:11:57.994 12:07:57 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:57.994 12:07:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:57.994 12:07:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:57.994 12:07:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:57.994 12:07:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:57.994 12:07:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:57.994 12:07:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:57.994 12:07:57 -- nvmf/common.sh@295 -- # net_devs=() 00:11:57.994 12:07:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:57.994 12:07:57 -- nvmf/common.sh@296 -- # e810=() 00:11:57.994 12:07:57 -- nvmf/common.sh@296 -- # local -ga e810 00:11:57.994 12:07:57 -- nvmf/common.sh@297 -- # x722=() 00:11:57.994 12:07:57 -- nvmf/common.sh@297 -- # local -ga x722 00:11:57.994 12:07:57 -- nvmf/common.sh@298 -- # mlx=() 00:11:57.994 12:07:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:57.994 12:07:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.994 12:07:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.994 12:07:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.994 12:07:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.994 12:07:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.994 12:07:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.994 12:07:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.994 12:07:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.994 12:07:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.994 12:07:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.994 12:07:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.994 12:07:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:57.994 12:07:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:57.994 12:07:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:57.994 12:07:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.994 12:07:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:57.994 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:57.994 12:07:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.994 12:07:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:57.994 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:57.994 12:07:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:11:57.994 12:07:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:57.994 12:07:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.994 12:07:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.994 12:07:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:57.994 12:07:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.994 12:07:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:57.994 Found net devices under 0000:31:00.0: cvl_0_0 00:11:57.994 12:07:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.994 12:07:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.994 12:07:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.994 12:07:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:57.994 12:07:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.994 12:07:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:57.994 Found net devices under 0000:31:00.1: cvl_0_1 00:11:57.994 12:07:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.994 12:07:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:57.994 12:07:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:57.994 12:07:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:57.994 12:07:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:57.994 12:07:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.994 12:07:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.994 12:07:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.995 12:07:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:57.995 12:07:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.995 12:07:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.995 12:07:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:57.995 12:07:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.995 12:07:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.995 12:07:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:57.995 12:07:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:57.995 12:07:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.995 12:07:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.995 12:07:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.995 12:07:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.995 12:07:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:57.995 12:07:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.995 12:07:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.995 12:07:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.995 12:07:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:57.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:57.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:11:57.995 00:11:57.995 --- 10.0.0.2 ping statistics --- 00:11:57.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.995 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:11:57.995 12:07:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:57.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:11:57.995 00:11:57.995 --- 10.0.0.1 ping statistics --- 00:11:57.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.995 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:11:57.995 12:07:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.995 12:07:58 -- nvmf/common.sh@411 -- # return 0 00:11:57.995 12:07:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:57.995 12:07:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.995 12:07:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:57.995 12:07:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:57.995 12:07:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.995 12:07:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:57.995 12:07:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:57.995 12:07:58 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:57.995 12:07:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:57.995 12:07:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:57.995 12:07:58 -- common/autotest_common.sh@10 -- # set +x 00:11:57.995 12:07:58 -- nvmf/common.sh@470 -- # nvmfpid=3318042 00:11:57.995 12:07:58 -- nvmf/common.sh@471 -- # waitforlisten 3318042 00:11:57.995 12:07:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.995 12:07:58 -- common/autotest_common.sh@817 -- # '[' -z 3318042 ']' 00:11:57.995 12:07:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.995 12:07:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:57.995 12:07:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.995 12:07:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:57.995 12:07:58 -- common/autotest_common.sh@10 -- # set +x 00:11:57.995 [2024-04-26 12:07:58.281257] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:11:57.995 [2024-04-26 12:07:58.281321] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.995 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.995 [2024-04-26 12:07:58.354041] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.995 [2024-04-26 12:07:58.426955] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.995 [2024-04-26 12:07:58.426999] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:57.995 [2024-04-26 12:07:58.427009] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.995 [2024-04-26 12:07:58.427016] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.995 [2024-04-26 12:07:58.427023] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.995 [2024-04-26 12:07:58.427195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.995 [2024-04-26 12:07:58.427311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.995 [2024-04-26 12:07:58.427468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.995 [2024-04-26 12:07:58.427469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.995 12:07:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:57.995 12:07:59 -- common/autotest_common.sh@850 -- # return 0 00:11:57.995 12:07:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:57.995 12:07:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:57.995 12:07:59 -- common/autotest_common.sh@10 -- # set +x 00:11:57.995 12:07:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.995 12:07:59 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:57.995 12:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:57.995 12:07:59 -- common/autotest_common.sh@10 -- # set +x 00:11:57.995 [2024-04-26 12:07:59.111450] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.995 12:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:57.995 12:07:59 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:57.995 12:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:57.995 12:07:59 -- common/autotest_common.sh@10 -- # set +x 00:11:57.995 Malloc0 00:11:57.995 12:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:57.995 12:07:59 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:57.995 12:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:57.995 12:07:59 -- common/autotest_common.sh@10 -- # set +x 00:11:57.995 Malloc1 00:11:57.995 12:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:57.995 12:07:59 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:57.995 12:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:57.995 12:07:59 -- common/autotest_common.sh@10 -- # set +x 00:11:57.995 12:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:57.995 12:07:59 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:57.995 12:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:57.995 12:07:59 -- common/autotest_common.sh@10 -- # set +x 00:11:57.995 12:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:57.995 12:07:59 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.995 12:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:57.995 12:07:59 -- common/autotest_common.sh@10 -- # set +x 00:11:57.995 12:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:57.995 12:07:59 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:11:57.995 12:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:57.995 12:07:59 -- common/autotest_common.sh@10 -- # set +x 00:11:57.995 [2024-04-26 12:07:59.201518] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.995 12:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:57.995 12:07:59 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:57.995 12:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:57.995 12:07:59 -- common/autotest_common.sh@10 -- # set +x 00:11:58.257 12:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.257 12:07:59 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:11:58.257 00:11:58.257 Discovery Log Number of Records 2, Generation counter 2 00:11:58.257 =====Discovery Log Entry 0====== 00:11:58.257 trtype: tcp 00:11:58.257 adrfam: ipv4 00:11:58.257 subtype: current discovery subsystem 00:11:58.257 treq: not required 00:11:58.257 portid: 0 00:11:58.257 trsvcid: 4420 00:11:58.257 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:58.257 traddr: 10.0.0.2 00:11:58.257 eflags: explicit discovery connections, duplicate discovery information 00:11:58.257 sectype: none 00:11:58.257 =====Discovery Log Entry 1====== 00:11:58.257 trtype: tcp 00:11:58.257 adrfam: ipv4 00:11:58.257 subtype: nvme subsystem 00:11:58.257 treq: not required 00:11:58.257 portid: 0 00:11:58.257 trsvcid: 4420 00:11:58.257 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:58.257 traddr: 10.0.0.2 00:11:58.257 eflags: none 00:11:58.257 sectype: none 00:11:58.257 12:07:59 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:58.257 12:07:59 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:58.257 12:07:59 -- nvmf/common.sh@511 -- # local dev _ 00:11:58.257 12:07:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:58.257 12:07:59 -- nvmf/common.sh@510 -- # nvme list 00:11:58.257 12:07:59 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:58.257 12:07:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:58.257 12:07:59 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:58.257 12:07:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:58.257 12:07:59 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:58.257 12:07:59 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.639 12:08:00 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:59.639 12:08:00 -- common/autotest_common.sh@1184 -- # local i=0 00:11:59.639 12:08:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.639 12:08:00 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:11:59.639 12:08:00 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:11:59.639 12:08:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:01.649 12:08:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:01.649 12:08:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:01.649 12:08:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.649 12:08:02 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
00:12:01.649 12:08:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.649 12:08:02 -- common/autotest_common.sh@1194 -- # return 0 00:12:01.649 12:08:02 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:01.649 12:08:02 -- nvmf/common.sh@511 -- # local dev _ 00:12:01.649 12:08:02 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.649 12:08:02 -- nvmf/common.sh@510 -- # nvme list 00:12:01.910 12:08:02 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:01.910 12:08:02 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.910 12:08:02 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:01.910 12:08:02 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.910 12:08:02 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:01.910 12:08:02 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:01.910 12:08:02 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.910 12:08:02 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:01.910 12:08:02 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:01.910 12:08:02 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.910 12:08:02 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:01.910 /dev/nvme0n1 ]] 00:12:01.910 12:08:03 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:01.910 12:08:03 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:01.910 12:08:03 -- nvmf/common.sh@511 -- # local dev _ 00:12:01.910 12:08:03 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.910 12:08:03 -- nvmf/common.sh@510 -- # nvme list 00:12:02.170 12:08:03 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:02.170 12:08:03 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:02.170 12:08:03 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:02.170 12:08:03 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:02.170 12:08:03 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:02.170 12:08:03 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:02.170 12:08:03 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:02.170 12:08:03 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:02.170 12:08:03 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:02.170 12:08:03 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:02.170 12:08:03 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:02.170 12:08:03 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.430 12:08:03 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.430 12:08:03 -- common/autotest_common.sh@1205 -- # local i=0 00:12:02.430 12:08:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:02.430 12:08:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.430 12:08:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:02.430 12:08:03 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.430 12:08:03 -- common/autotest_common.sh@1217 -- # return 0 00:12:02.430 12:08:03 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:02.430 12:08:03 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.430 12:08:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:02.430 12:08:03 -- common/autotest_common.sh@10 -- # set +x 00:12:02.430 12:08:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:02.430 12:08:03 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:02.430 12:08:03 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:02.430 12:08:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:02.430 12:08:03 -- nvmf/common.sh@117 -- # sync 00:12:02.430 12:08:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:02.430 12:08:03 -- nvmf/common.sh@120 -- # set +e 00:12:02.430 12:08:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:02.430 12:08:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:02.430 rmmod nvme_tcp 00:12:02.430 rmmod nvme_fabrics 00:12:02.430 rmmod nvme_keyring 00:12:02.430 12:08:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:02.430 12:08:03 -- nvmf/common.sh@124 -- # set -e 00:12:02.430 12:08:03 -- nvmf/common.sh@125 -- # return 0 00:12:02.430 12:08:03 -- nvmf/common.sh@478 -- # '[' -n 3318042 ']' 00:12:02.430 12:08:03 -- nvmf/common.sh@479 -- # killprocess 3318042 00:12:02.430 12:08:03 -- common/autotest_common.sh@936 -- # '[' -z 3318042 ']' 00:12:02.430 12:08:03 -- common/autotest_common.sh@940 -- # kill -0 3318042 00:12:02.430 12:08:03 -- common/autotest_common.sh@941 -- # uname 00:12:02.430 12:08:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:02.430 12:08:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3318042 00:12:02.430 12:08:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:02.430 12:08:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:02.430 12:08:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3318042' 00:12:02.430 killing process with pid 3318042 00:12:02.430 12:08:03 -- common/autotest_common.sh@955 -- # kill 3318042 00:12:02.430 12:08:03 -- common/autotest_common.sh@960 -- # wait 3318042 00:12:02.690 12:08:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:02.690 12:08:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:02.690 12:08:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:02.690 12:08:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.690 12:08:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.690 12:08:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.690 12:08:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.690 12:08:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.231 12:08:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:05.231 00:12:05.231 real 0m14.940s 00:12:05.231 user 0m23.381s 00:12:05.231 sys 0m5.912s 00:12:05.231 12:08:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:05.231 12:08:05 -- common/autotest_common.sh@10 -- # set +x 00:12:05.231 ************************************ 00:12:05.231 END TEST nvmf_nvme_cli 00:12:05.231 ************************************ 00:12:05.231 12:08:05 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:05.231 12:08:05 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:05.231 12:08:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:05.231 12:08:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:05.231 12:08:05 -- common/autotest_common.sh@10 -- # set +x 00:12:05.231 ************************************ 00:12:05.231 START TEST nvmf_vfio_user 00:12:05.231 ************************************ 00:12:05.231 12:08:06 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:05.231 * Looking for test storage... 00:12:05.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.231 12:08:06 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.231 12:08:06 -- nvmf/common.sh@7 -- # uname -s 00:12:05.231 12:08:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.231 12:08:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.231 12:08:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.231 12:08:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.232 12:08:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.232 12:08:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.232 12:08:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.232 12:08:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.232 12:08:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.232 12:08:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.232 12:08:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:05.232 12:08:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:05.232 12:08:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.232 12:08:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.232 12:08:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.232 12:08:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.232 12:08:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.232 12:08:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.232 12:08:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.232 12:08:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.232 12:08:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.232 12:08:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.232 12:08:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.232 12:08:06 -- paths/export.sh@5 -- # export PATH 00:12:05.232 12:08:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.232 12:08:06 -- nvmf/common.sh@47 -- # : 0 00:12:05.232 12:08:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:05.232 12:08:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:05.232 12:08:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.232 12:08:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.232 12:08:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.232 12:08:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:05.232 12:08:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:05.232 12:08:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3319752 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3319752' 00:12:05.232 Process pid: 3319752 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3319752 00:12:05.232 12:08:06 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:05.232 12:08:06 -- common/autotest_common.sh@817 -- # '[' -z 3319752 ']' 00:12:05.232 12:08:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.232 12:08:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:05.232 12:08:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.232 12:08:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:05.232 12:08:06 -- common/autotest_common.sh@10 -- # set +x 00:12:05.232 [2024-04-26 12:08:06.234962] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:12:05.232 [2024-04-26 12:08:06.235032] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.232 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.232 [2024-04-26 12:08:06.300728] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.232 [2024-04-26 12:08:06.373689] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.232 [2024-04-26 12:08:06.373733] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.232 [2024-04-26 12:08:06.373742] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.232 [2024-04-26 12:08:06.373748] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.232 [2024-04-26 12:08:06.373753] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.232 [2024-04-26 12:08:06.373820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.232 [2024-04-26 12:08:06.373923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.232 [2024-04-26 12:08:06.374063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.232 [2024-04-26 12:08:06.374064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.802 12:08:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:05.802 12:08:07 -- common/autotest_common.sh@850 -- # return 0 00:12:05.802 12:08:07 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:07.190 12:08:08 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:07.190 12:08:08 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:07.190 12:08:08 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:07.190 12:08:08 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:07.190 12:08:08 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:07.190 12:08:08 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:07.190 Malloc1 00:12:07.190 12:08:08 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:07.451 12:08:08 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:07.712 12:08:08 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:07.712 12:08:08 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:07.712 12:08:08 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:07.712 12:08:08 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:07.973 Malloc2 00:12:07.973 12:08:09 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:08.234 12:08:09 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:08.234 12:08:09 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:08.496 12:08:09 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:08.496 12:08:09 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:08.496 12:08:09 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:08.496 12:08:09 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:08.496 12:08:09 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:08.496 12:08:09 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:08.496 [2024-04-26 12:08:09.587587] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:12:08.496 [2024-04-26 12:08:09.587646] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320435 ] 00:12:08.496 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.496 [2024-04-26 12:08:09.620451] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:08.496 [2024-04-26 12:08:09.629156] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:08.496 [2024-04-26 12:08:09.629175] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa29f0b6000 00:12:08.496 [2024-04-26 12:08:09.630157] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.496 [2024-04-26 12:08:09.631156] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.496 [2024-04-26 12:08:09.632163] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.496 [2024-04-26 12:08:09.633171] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:08.496 [2024-04-26 12:08:09.634178] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:08.496 [2024-04-26 12:08:09.635181] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:12:08.496 [2024-04-26 12:08:09.636187] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:08.496 [2024-04-26 12:08:09.637188] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.496 [2024-04-26 12:08:09.638199] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:08.496 [2024-04-26 12:08:09.638211] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa29f0ab000 00:12:08.496 [2024-04-26 12:08:09.639541] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:08.496 [2024-04-26 12:08:09.656463] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:08.496 [2024-04-26 12:08:09.656488] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:08.496 [2024-04-26 12:08:09.659344] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:08.496 [2024-04-26 12:08:09.659393] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:08.496 [2024-04-26 12:08:09.659479] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:08.497 [2024-04-26 12:08:09.659497] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:08.497 [2024-04-26 12:08:09.659503] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:08.497 [2024-04-26 12:08:09.663844] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:08.497 [2024-04-26 12:08:09.663854] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:08.497 [2024-04-26 12:08:09.663861] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:08.497 [2024-04-26 12:08:09.664356] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:08.497 [2024-04-26 12:08:09.664363] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:08.497 [2024-04-26 12:08:09.664370] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:08.497 [2024-04-26 12:08:09.665358] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:08.497 [2024-04-26 12:08:09.665366] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:08.497 [2024-04-26 12:08:09.666364] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:08.497 [2024-04-26 12:08:09.666372] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:08.497 [2024-04-26 12:08:09.666377] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:08.497 [2024-04-26 12:08:09.666383] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:08.497 [2024-04-26 12:08:09.666489] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:08.497 [2024-04-26 12:08:09.666496] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:08.497 [2024-04-26 12:08:09.666501] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:08.497 [2024-04-26 12:08:09.667373] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:08.497 [2024-04-26 12:08:09.668377] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:08.497 [2024-04-26 12:08:09.669387] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:08.497 [2024-04-26 12:08:09.670380] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:08.497 [2024-04-26 12:08:09.670433] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:08.497 [2024-04-26 12:08:09.671398] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:08.497 [2024-04-26 12:08:09.671406] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:08.497 [2024-04-26 12:08:09.671410] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671431] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:08.497 [2024-04-26 12:08:09.671439] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671454] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:08.497 [2024-04-26 12:08:09.671459] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:08.497 [2024-04-26 12:08:09.671472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:08.497 [2024-04-26 
12:08:09.671506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:08.497 [2024-04-26 12:08:09.671515] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:08.497 [2024-04-26 12:08:09.671520] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:08.497 [2024-04-26 12:08:09.671524] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:08.497 [2024-04-26 12:08:09.671529] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:08.497 [2024-04-26 12:08:09.671534] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:08.497 [2024-04-26 12:08:09.671538] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:08.497 [2024-04-26 12:08:09.671543] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671551] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:08.497 [2024-04-26 12:08:09.671572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:08.497 [2024-04-26 12:08:09.671586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.497 [2024-04-26 12:08:09.671595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.497 [2024-04-26 12:08:09.671603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.497 [2024-04-26 12:08:09.671611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.497 [2024-04-26 12:08:09.671616] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671625] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671634] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:08.497 [2024-04-26 12:08:09.671643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:08.497 [2024-04-26 12:08:09.671649] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:08.497 [2024-04-26 12:08:09.671654] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671662] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671667] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:08.497 [2024-04-26 12:08:09.671685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:08.497 [2024-04-26 12:08:09.671732] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671740] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671747] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:08.497 [2024-04-26 12:08:09.671751] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:08.497 [2024-04-26 12:08:09.671758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:08.497 [2024-04-26 12:08:09.671770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:08.497 [2024-04-26 12:08:09.671779] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:08.497 [2024-04-26 12:08:09.671790] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671798] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671805] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:08.497 [2024-04-26 12:08:09.671809] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:08.497 [2024-04-26 12:08:09.671817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:08.497 [2024-04-26 12:08:09.671832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:08.497 [2024-04-26 12:08:09.671848] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671856] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671863] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:12:08.497 [2024-04-26 12:08:09.671867] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:08.497 [2024-04-26 12:08:09.671873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:08.497 [2024-04-26 12:08:09.671882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:08.497 [2024-04-26 12:08:09.671890] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671896] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671903] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671909] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:08.497 [2024-04-26 12:08:09.671914] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:08.498 [2024-04-26 12:08:09.671919] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:08.498 [2024-04-26 12:08:09.671924] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:08.498 [2024-04-26 12:08:09.671929] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:08.498 [2024-04-26 12:08:09.671946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:08.498 [2024-04-26 12:08:09.671956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:08.498 [2024-04-26 12:08:09.671967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:08.498 [2024-04-26 12:08:09.671976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:08.498 [2024-04-26 12:08:09.671986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:08.498 [2024-04-26 12:08:09.671997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:08.498 [2024-04-26 12:08:09.672008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:08.498 [2024-04-26 12:08:09.672019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:08.498 [2024-04-26 12:08:09.672029] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:08.498 [2024-04-26 12:08:09.672035] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:08.498 [2024-04-26 12:08:09.672039] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:08.498 [2024-04-26 12:08:09.672042] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:08.498 [2024-04-26 12:08:09.672048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:08.498 [2024-04-26 12:08:09.672056] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:08.498 [2024-04-26 12:08:09.672060] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:08.498 [2024-04-26 12:08:09.672066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:08.498 [2024-04-26 12:08:09.672073] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:08.498 [2024-04-26 12:08:09.672078] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:08.498 [2024-04-26 12:08:09.672083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:08.498 [2024-04-26 12:08:09.672091] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:08.498 [2024-04-26 12:08:09.672095] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:08.498 [2024-04-26 12:08:09.672101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:08.498 [2024-04-26 12:08:09.672108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:08.498 [2024-04-26 12:08:09.672120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:08.498 [2024-04-26 12:08:09.672129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:08.498 [2024-04-26 12:08:09.672136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:08.498 ===================================================== 00:12:08.498 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:08.498 ===================================================== 00:12:08.498 Controller Capabilities/Features 00:12:08.498 ================================ 00:12:08.498 Vendor ID: 4e58 00:12:08.498 Subsystem Vendor ID: 4e58 00:12:08.498 Serial Number: SPDK1 00:12:08.498 Model Number: SPDK bdev Controller 00:12:08.498 Firmware Version: 24.05 00:12:08.498 Recommended Arb Burst: 6 00:12:08.498 IEEE OUI Identifier: 8d 6b 50 00:12:08.498 Multi-path I/O 00:12:08.498 May have multiple subsystem ports: Yes 00:12:08.498 May have multiple controllers: Yes 00:12:08.498 Associated with SR-IOV VF: No 00:12:08.498 Max Data Transfer Size: 131072 00:12:08.498 Max Number of Namespaces: 32 00:12:08.498 Max Number of I/O Queues: 127 00:12:08.498 NVMe 
Specification Version (VS): 1.3 00:12:08.498 NVMe Specification Version (Identify): 1.3 00:12:08.498 Maximum Queue Entries: 256 00:12:08.498 Contiguous Queues Required: Yes 00:12:08.498 Arbitration Mechanisms Supported 00:12:08.498 Weighted Round Robin: Not Supported 00:12:08.498 Vendor Specific: Not Supported 00:12:08.498 Reset Timeout: 15000 ms 00:12:08.498 Doorbell Stride: 4 bytes 00:12:08.498 NVM Subsystem Reset: Not Supported 00:12:08.498 Command Sets Supported 00:12:08.498 NVM Command Set: Supported 00:12:08.498 Boot Partition: Not Supported 00:12:08.498 Memory Page Size Minimum: 4096 bytes 00:12:08.498 Memory Page Size Maximum: 4096 bytes 00:12:08.498 Persistent Memory Region: Not Supported 00:12:08.498 Optional Asynchronous Events Supported 00:12:08.498 Namespace Attribute Notices: Supported 00:12:08.498 Firmware Activation Notices: Not Supported 00:12:08.498 ANA Change Notices: Not Supported 00:12:08.498 PLE Aggregate Log Change Notices: Not Supported 00:12:08.498 LBA Status Info Alert Notices: Not Supported 00:12:08.498 EGE Aggregate Log Change Notices: Not Supported 00:12:08.498 Normal NVM Subsystem Shutdown event: Not Supported 00:12:08.498 Zone Descriptor Change Notices: Not Supported 00:12:08.498 Discovery Log Change Notices: Not Supported 00:12:08.498 Controller Attributes 00:12:08.498 128-bit Host Identifier: Supported 00:12:08.498 Non-Operational Permissive Mode: Not Supported 00:12:08.498 NVM Sets: Not Supported 00:12:08.498 Read Recovery Levels: Not Supported 00:12:08.498 Endurance Groups: Not Supported 00:12:08.498 Predictable Latency Mode: Not Supported 00:12:08.498 Traffic Based Keep ALive: Not Supported 00:12:08.498 Namespace Granularity: Not Supported 00:12:08.498 SQ Associations: Not Supported 00:12:08.498 UUID List: Not Supported 00:12:08.498 Multi-Domain Subsystem: Not Supported 00:12:08.498 Fixed Capacity Management: Not Supported 00:12:08.498 Variable Capacity Management: Not Supported 00:12:08.498 Delete Endurance Group: Not Supported 00:12:08.498 Delete NVM Set: Not Supported 00:12:08.498 Extended LBA Formats Supported: Not Supported 00:12:08.498 Flexible Data Placement Supported: Not Supported 00:12:08.498 00:12:08.498 Controller Memory Buffer Support 00:12:08.498 ================================ 00:12:08.498 Supported: No 00:12:08.498 00:12:08.498 Persistent Memory Region Support 00:12:08.498 ================================ 00:12:08.498 Supported: No 00:12:08.498 00:12:08.498 Admin Command Set Attributes 00:12:08.498 ============================ 00:12:08.498 Security Send/Receive: Not Supported 00:12:08.498 Format NVM: Not Supported 00:12:08.498 Firmware Activate/Download: Not Supported 00:12:08.498 Namespace Management: Not Supported 00:12:08.498 Device Self-Test: Not Supported 00:12:08.498 Directives: Not Supported 00:12:08.498 NVMe-MI: Not Supported 00:12:08.498 Virtualization Management: Not Supported 00:12:08.498 Doorbell Buffer Config: Not Supported 00:12:08.498 Get LBA Status Capability: Not Supported 00:12:08.498 Command & Feature Lockdown Capability: Not Supported 00:12:08.498 Abort Command Limit: 4 00:12:08.498 Async Event Request Limit: 4 00:12:08.498 Number of Firmware Slots: N/A 00:12:08.498 Firmware Slot 1 Read-Only: N/A 00:12:08.498 Firmware Activation Without Reset: N/A 00:12:08.498 Multiple Update Detection Support: N/A 00:12:08.498 Firmware Update Granularity: No Information Provided 00:12:08.498 Per-Namespace SMART Log: No 00:12:08.498 Asymmetric Namespace Access Log Page: Not Supported 00:12:08.498 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:12:08.498 Command Effects Log Page: Supported 00:12:08.498 Get Log Page Extended Data: Supported 00:12:08.498 Telemetry Log Pages: Not Supported 00:12:08.498 Persistent Event Log Pages: Not Supported 00:12:08.498 Supported Log Pages Log Page: May Support 00:12:08.498 Commands Supported & Effects Log Page: Not Supported 00:12:08.498 Feature Identifiers & Effects Log Page:May Support 00:12:08.498 NVMe-MI Commands & Effects Log Page: May Support 00:12:08.498 Data Area 4 for Telemetry Log: Not Supported 00:12:08.498 Error Log Page Entries Supported: 128 00:12:08.498 Keep Alive: Supported 00:12:08.498 Keep Alive Granularity: 10000 ms 00:12:08.498 00:12:08.498 NVM Command Set Attributes 00:12:08.498 ========================== 00:12:08.498 Submission Queue Entry Size 00:12:08.498 Max: 64 00:12:08.498 Min: 64 00:12:08.498 Completion Queue Entry Size 00:12:08.498 Max: 16 00:12:08.498 Min: 16 00:12:08.498 Number of Namespaces: 32 00:12:08.498 Compare Command: Supported 00:12:08.498 Write Uncorrectable Command: Not Supported 00:12:08.498 Dataset Management Command: Supported 00:12:08.498 Write Zeroes Command: Supported 00:12:08.498 Set Features Save Field: Not Supported 00:12:08.498 Reservations: Not Supported 00:12:08.498 Timestamp: Not Supported 00:12:08.498 Copy: Supported 00:12:08.498 Volatile Write Cache: Present 00:12:08.498 Atomic Write Unit (Normal): 1 00:12:08.498 Atomic Write Unit (PFail): 1 00:12:08.498 Atomic Compare & Write Unit: 1 00:12:08.498 Fused Compare & Write: Supported 00:12:08.498 Scatter-Gather List 00:12:08.499 SGL Command Set: Supported (Dword aligned) 00:12:08.499 SGL Keyed: Not Supported 00:12:08.499 SGL Bit Bucket Descriptor: Not Supported 00:12:08.499 SGL Metadata Pointer: Not Supported 00:12:08.499 Oversized SGL: Not Supported 00:12:08.499 SGL Metadata Address: Not Supported 00:12:08.499 SGL Offset: Not Supported 00:12:08.499 Transport SGL Data Block: Not Supported 00:12:08.499 Replay Protected Memory Block: Not Supported 00:12:08.499 00:12:08.499 Firmware Slot Information 00:12:08.499 ========================= 00:12:08.499 Active slot: 1 00:12:08.499 Slot 1 Firmware Revision: 24.05 00:12:08.499 00:12:08.499 00:12:08.499 Commands Supported and Effects 00:12:08.499 ============================== 00:12:08.499 Admin Commands 00:12:08.499 -------------- 00:12:08.499 Get Log Page (02h): Supported 00:12:08.499 Identify (06h): Supported 00:12:08.499 Abort (08h): Supported 00:12:08.499 Set Features (09h): Supported 00:12:08.499 Get Features (0Ah): Supported 00:12:08.499 Asynchronous Event Request (0Ch): Supported 00:12:08.499 Keep Alive (18h): Supported 00:12:08.499 I/O Commands 00:12:08.499 ------------ 00:12:08.499 Flush (00h): Supported LBA-Change 00:12:08.499 Write (01h): Supported LBA-Change 00:12:08.499 Read (02h): Supported 00:12:08.499 Compare (05h): Supported 00:12:08.499 Write Zeroes (08h): Supported LBA-Change 00:12:08.499 Dataset Management (09h): Supported LBA-Change 00:12:08.499 Copy (19h): Supported LBA-Change 00:12:08.499 Unknown (79h): Supported LBA-Change 00:12:08.499 Unknown (7Ah): Supported 00:12:08.499 00:12:08.499 Error Log 00:12:08.499 ========= 00:12:08.499 00:12:08.499 Arbitration 00:12:08.499 =========== 00:12:08.499 Arbitration Burst: 1 00:12:08.499 00:12:08.499 Power Management 00:12:08.499 ================ 00:12:08.499 Number of Power States: 1 00:12:08.499 Current Power State: Power State #0 00:12:08.499 Power State #0: 00:12:08.499 Max Power: 0.00 W 00:12:08.499 Non-Operational State: Operational 00:12:08.499 Entry 
Latency: Not Reported 00:12:08.499 Exit Latency: Not Reported 00:12:08.499 Relative Read Throughput: 0 00:12:08.499 Relative Read Latency: 0 00:12:08.499 Relative Write Throughput: 0 00:12:08.499 Relative Write Latency: 0 00:12:08.499 Idle Power: Not Reported 00:12:08.499 Active Power: Not Reported 00:12:08.499 Non-Operational Permissive Mode: Not Supported 00:12:08.499 00:12:08.499 Health Information 00:12:08.499 ================== 00:12:08.499 Critical Warnings: 00:12:08.499 Available Spare Space: OK 00:12:08.499 Temperature: OK 00:12:08.499 Device Reliability: OK 00:12:08.499 Read Only: No 00:12:08.499 Volatile Memory Backup: OK 00:12:08.499 Current Temperature: 0 Kelvin (-2[2024-04-26 12:08:09.672243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:08.499 [2024-04-26 12:08:09.672254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:08.499 [2024-04-26 12:08:09.672279] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:08.499 [2024-04-26 12:08:09.672288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.499 [2024-04-26 12:08:09.672295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.499 [2024-04-26 12:08:09.672301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.499 [2024-04-26 12:08:09.672307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.499 [2024-04-26 12:08:09.672399] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:08.499 [2024-04-26 12:08:09.672409] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:08.499 [2024-04-26 12:08:09.673404] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:08.499 [2024-04-26 12:08:09.673442] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:08.499 [2024-04-26 12:08:09.673451] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:08.499 [2024-04-26 12:08:09.674416] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:08.499 [2024-04-26 12:08:09.674426] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:08.499 [2024-04-26 12:08:09.674484] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:08.499 [2024-04-26 12:08:09.676436] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:08.760 73 Celsius) 00:12:08.760 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:08.760 Available Spare: 0% 00:12:08.760 Available Spare Threshold: 0% 00:12:08.760 Life Percentage Used: 0% 
00:12:08.760 Data Units Read: 0 00:12:08.760 Data Units Written: 0 00:12:08.760 Host Read Commands: 0 00:12:08.760 Host Write Commands: 0 00:12:08.760 Controller Busy Time: 0 minutes 00:12:08.760 Power Cycles: 0 00:12:08.760 Power On Hours: 0 hours 00:12:08.760 Unsafe Shutdowns: 0 00:12:08.760 Unrecoverable Media Errors: 0 00:12:08.760 Lifetime Error Log Entries: 0 00:12:08.760 Warning Temperature Time: 0 minutes 00:12:08.760 Critical Temperature Time: 0 minutes 00:12:08.760 00:12:08.760 Number of Queues 00:12:08.760 ================ 00:12:08.760 Number of I/O Submission Queues: 127 00:12:08.760 Number of I/O Completion Queues: 127 00:12:08.760 00:12:08.760 Active Namespaces 00:12:08.760 ================= 00:12:08.760 Namespace ID:1 00:12:08.760 Error Recovery Timeout: Unlimited 00:12:08.760 Command Set Identifier: NVM (00h) 00:12:08.760 Deallocate: Supported 00:12:08.760 Deallocated/Unwritten Error: Not Supported 00:12:08.760 Deallocated Read Value: Unknown 00:12:08.760 Deallocate in Write Zeroes: Not Supported 00:12:08.760 Deallocated Guard Field: 0xFFFF 00:12:08.760 Flush: Supported 00:12:08.760 Reservation: Supported 00:12:08.760 Namespace Sharing Capabilities: Multiple Controllers 00:12:08.760 Size (in LBAs): 131072 (0GiB) 00:12:08.760 Capacity (in LBAs): 131072 (0GiB) 00:12:08.760 Utilization (in LBAs): 131072 (0GiB) 00:12:08.760 NGUID: 6917D4D6FD0F4B3BB20F5453C314184D 00:12:08.760 UUID: 6917d4d6-fd0f-4b3b-b20f-5453c314184d 00:12:08.760 Thin Provisioning: Not Supported 00:12:08.760 Per-NS Atomic Units: Yes 00:12:08.760 Atomic Boundary Size (Normal): 0 00:12:08.760 Atomic Boundary Size (PFail): 0 00:12:08.760 Atomic Boundary Offset: 0 00:12:08.760 Maximum Single Source Range Length: 65535 00:12:08.760 Maximum Copy Length: 65535 00:12:08.760 Maximum Source Range Count: 1 00:12:08.760 NGUID/EUI64 Never Reused: No 00:12:08.760 Namespace Write Protected: No 00:12:08.760 Number of LBA Formats: 1 00:12:08.760 Current LBA Format: LBA Format #00 00:12:08.760 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:08.760 00:12:08.760 12:08:09 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:08.760 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.760 [2024-04-26 12:08:09.861498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:14.044 [2024-04-26 12:08:14.880433] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:14.044 Initializing NVMe Controllers 00:12:14.044 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:14.044 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:14.044 Initialization complete. Launching workers. 
00:12:14.044 ======================================================== 00:12:14.044 Latency(us) 00:12:14.044 Device Information : IOPS MiB/s Average min max 00:12:14.044 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40084.84 156.58 3193.10 850.62 6819.74 00:12:14.044 ======================================================== 00:12:14.044 Total : 40084.84 156.58 3193.10 850.62 6819.74 00:12:14.044 00:12:14.044 12:08:14 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:14.044 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.044 [2024-04-26 12:08:15.054236] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:19.327 [2024-04-26 12:08:20.091202] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:19.327 Initializing NVMe Controllers 00:12:19.327 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:19.327 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:19.327 Initialization complete. Launching workers. 00:12:19.327 ======================================================== 00:12:19.327 Latency(us) 00:12:19.327 Device Information : IOPS MiB/s Average min max 00:12:19.327 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7983.92 4987.06 10975.07 00:12:19.327 ======================================================== 00:12:19.327 Total : 16051.20 62.70 7983.92 4987.06 10975.07 00:12:19.327 00:12:19.327 12:08:20 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:19.327 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.327 [2024-04-26 12:08:20.269012] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:24.613 [2024-04-26 12:08:25.358161] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:24.613 Initializing NVMe Controllers 00:12:24.613 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:24.613 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:24.613 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:24.613 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:24.613 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:24.613 Initialization complete. Launching workers. 
00:12:24.613 Starting thread on core 2 00:12:24.613 Starting thread on core 3 00:12:24.613 Starting thread on core 1 00:12:24.613 12:08:25 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:24.613 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.613 [2024-04-26 12:08:25.614234] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.911 [2024-04-26 12:08:28.670767] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:27.911 Initializing NVMe Controllers 00:12:27.911 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.911 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.911 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:27.911 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:27.911 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:27.911 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:27.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:27.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:27.911 Initialization complete. Launching workers. 00:12:27.911 Starting thread on core 1 with urgent priority queue 00:12:27.911 Starting thread on core 2 with urgent priority queue 00:12:27.911 Starting thread on core 3 with urgent priority queue 00:12:27.911 Starting thread on core 0 with urgent priority queue 00:12:27.911 SPDK bdev Controller (SPDK1 ) core 0: 10775.67 IO/s 9.28 secs/100000 ios 00:12:27.911 SPDK bdev Controller (SPDK1 ) core 1: 16801.67 IO/s 5.95 secs/100000 ios 00:12:27.911 SPDK bdev Controller (SPDK1 ) core 2: 8092.67 IO/s 12.36 secs/100000 ios 00:12:27.911 SPDK bdev Controller (SPDK1 ) core 3: 15473.67 IO/s 6.46 secs/100000 ios 00:12:27.911 ======================================================== 00:12:27.911 00:12:27.911 12:08:28 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:27.911 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.911 [2024-04-26 12:08:28.929333] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.911 [2024-04-26 12:08:28.963527] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:27.911 Initializing NVMe Controllers 00:12:27.911 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.911 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.911 Namespace ID: 1 size: 0GB 00:12:27.911 Initialization complete. 00:12:27.911 INFO: using host memory buffer for IO 00:12:27.911 Hello world! 
00:12:27.911 12:08:29 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:27.911 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.171 [2024-04-26 12:08:29.215700] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:29.112 Initializing NVMe Controllers 00:12:29.112 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.112 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.112 Initialization complete. Launching workers. 00:12:29.112 submit (in ns) avg, min, max = 8039.5, 3866.7, 5994457.5 00:12:29.112 complete (in ns) avg, min, max = 16534.3, 2346.7, 4031070.0 00:12:29.112 00:12:29.112 Submit histogram 00:12:29.112 ================ 00:12:29.112 Range in us Cumulative Count 00:12:29.112 3.867 - 3.893: 1.7883% ( 347) 00:12:29.112 3.893 - 3.920: 7.0449% ( 1020) 00:12:29.112 3.920 - 3.947: 16.1049% ( 1758) 00:12:29.112 3.947 - 3.973: 28.5920% ( 2423) 00:12:29.112 3.973 - 4.000: 40.4247% ( 2296) 00:12:29.112 4.000 - 4.027: 55.0660% ( 2841) 00:12:29.112 4.027 - 4.053: 71.8306% ( 3253) 00:12:29.112 4.053 - 4.080: 86.2760% ( 2803) 00:12:29.112 4.080 - 4.107: 93.8209% ( 1464) 00:12:29.112 4.107 - 4.133: 97.5160% ( 717) 00:12:29.112 4.133 - 4.160: 98.9384% ( 276) 00:12:29.112 4.160 - 4.187: 99.4022% ( 90) 00:12:29.112 4.187 - 4.213: 99.5104% ( 21) 00:12:29.112 4.213 - 4.240: 99.5259% ( 3) 00:12:29.112 4.240 - 4.267: 99.5362% ( 2) 00:12:29.112 4.267 - 4.293: 99.5413% ( 1) 00:12:29.112 4.320 - 4.347: 99.5465% ( 1) 00:12:29.112 4.347 - 4.373: 99.5516% ( 1) 00:12:29.112 4.453 - 4.480: 99.5568% ( 1) 00:12:29.112 4.560 - 4.587: 99.5619% ( 1) 00:12:29.112 4.880 - 4.907: 99.5671% ( 1) 00:12:29.112 5.120 - 5.147: 99.5723% ( 1) 00:12:29.112 5.227 - 5.253: 99.5774% ( 1) 00:12:29.112 5.600 - 5.627: 99.5826% ( 1) 00:12:29.112 5.707 - 5.733: 99.5877% ( 1) 00:12:29.112 5.947 - 5.973: 99.5980% ( 2) 00:12:29.112 6.053 - 6.080: 99.6083% ( 2) 00:12:29.112 6.107 - 6.133: 99.6135% ( 1) 00:12:29.112 6.320 - 6.347: 99.6238% ( 2) 00:12:29.112 6.347 - 6.373: 99.6289% ( 1) 00:12:29.112 6.427 - 6.453: 99.6341% ( 1) 00:12:29.112 6.453 - 6.480: 99.6444% ( 2) 00:12:29.112 6.480 - 6.507: 99.6599% ( 3) 00:12:29.112 6.507 - 6.533: 99.6650% ( 1) 00:12:29.112 6.533 - 6.560: 99.6702% ( 1) 00:12:29.112 6.587 - 6.613: 99.6753% ( 1) 00:12:29.112 6.613 - 6.640: 99.6908% ( 3) 00:12:29.112 6.693 - 6.720: 99.6959% ( 1) 00:12:29.112 6.720 - 6.747: 99.7011% ( 1) 00:12:29.112 6.773 - 6.800: 99.7062% ( 1) 00:12:29.112 6.827 - 6.880: 99.7166% ( 2) 00:12:29.112 6.880 - 6.933: 99.7269% ( 2) 00:12:29.112 6.933 - 6.987: 99.7423% ( 3) 00:12:29.112 6.987 - 7.040: 99.7526% ( 2) 00:12:29.112 7.040 - 7.093: 99.7629% ( 2) 00:12:29.112 7.147 - 7.200: 99.7784% ( 3) 00:12:29.112 7.200 - 7.253: 99.7835% ( 1) 00:12:29.112 7.253 - 7.307: 99.8042% ( 4) 00:12:29.112 7.360 - 7.413: 99.8093% ( 1) 00:12:29.112 7.413 - 7.467: 99.8145% ( 1) 00:12:29.112 7.467 - 7.520: 99.8248% ( 2) 00:12:29.112 7.573 - 7.627: 99.8299% ( 1) 00:12:29.112 7.627 - 7.680: 99.8351% ( 1) 00:12:29.112 7.680 - 7.733: 99.8402% ( 1) 00:12:29.112 7.733 - 7.787: 99.8454% ( 1) 00:12:29.112 7.893 - 7.947: 99.8505% ( 1) 00:12:29.112 7.947 - 8.000: 99.8557% ( 1) 00:12:29.112 8.000 - 8.053: 99.8609% ( 1) 00:12:29.112 8.053 - 8.107: 99.8712% ( 2) 00:12:29.112 8.107 - 8.160: 99.8763% ( 1) 00:12:29.112 8.213 - 8.267: 99.8866% ( 2) 
00:12:29.112 8.267 - 8.320: 99.8918% ( 1) 00:12:29.112 9.067 - 9.120: 99.8969% ( 1) 00:12:29.112 9.120 - 9.173: 99.9021% ( 1) 00:12:29.112 3986.773 - 4014.080: 99.9897% ( 17) 00:12:29.112 4014.080 - 4041.387: 99.9948% ( 1) 00:12:29.112 5980.160 - 6007.467: 100.0000% ( 1) 00:12:29.112 00:12:29.112 Complete histogram 00:12:29.112 ================== 00:12:29.112 Range in us Cumulative Count 00:12:29.112 2.347 - 2.360: 0.0052% ( 1) 00:12:29.112 2.360 - 2.373: 0.8503% ( 164) 00:12:29.112 2.373 - 2.387: 1.2214% ( 72) 00:12:29.112 2.387 - 2.400: 1.3554% ( 26) 00:12:29.112 2.400 - 2.413: 38.4199% ( 7192) 00:12:29.112 2.413 - 2.427: 61.5749% ( 4493) 00:12:29.112 2.427 - 2.440: 71.4646% ( 1919) 00:12:29.112 2.440 - 2.453: 78.5096% ( 1367) 00:12:29.112 2.453 - 2.467: 81.6533% ( 610) 00:12:29.112 2.467 - 2.480: 83.5292% ( 364) 00:12:29.112 2.480 - [2024-04-26 12:08:30.235238] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:29.112 2.493: 89.6155% ( 1181) 00:12:29.112 2.493 - 2.507: 94.2692% ( 903) 00:12:29.112 2.507 - 2.520: 96.9749% ( 525) 00:12:29.112 2.520 - 2.533: 98.4333% ( 283) 00:12:29.112 2.533 - 2.547: 99.1290% ( 135) 00:12:29.112 2.547 - 2.560: 99.3506% ( 43) 00:12:29.112 2.560 - 2.573: 99.3816% ( 6) 00:12:29.112 2.573 - 2.587: 99.3867% ( 1) 00:12:29.112 4.587 - 4.613: 99.3919% ( 1) 00:12:29.112 4.613 - 4.640: 99.3970% ( 1) 00:12:29.112 4.693 - 4.720: 99.4022% ( 1) 00:12:29.112 4.853 - 4.880: 99.4073% ( 1) 00:12:29.112 4.907 - 4.933: 99.4176% ( 2) 00:12:29.112 4.960 - 4.987: 99.4228% ( 1) 00:12:29.112 5.040 - 5.067: 99.4280% ( 1) 00:12:29.112 5.067 - 5.093: 99.4331% ( 1) 00:12:29.112 5.120 - 5.147: 99.4383% ( 1) 00:12:29.112 5.227 - 5.253: 99.4434% ( 1) 00:12:29.112 5.253 - 5.280: 99.4589% ( 3) 00:12:29.112 5.280 - 5.307: 99.4743% ( 3) 00:12:29.112 5.307 - 5.333: 99.4795% ( 1) 00:12:29.112 5.333 - 5.360: 99.4898% ( 2) 00:12:29.112 5.360 - 5.387: 99.4949% ( 1) 00:12:29.112 5.387 - 5.413: 99.5104% ( 3) 00:12:29.112 5.413 - 5.440: 99.5156% ( 1) 00:12:29.112 5.520 - 5.547: 99.5310% ( 3) 00:12:29.112 5.573 - 5.600: 99.5465% ( 3) 00:12:29.112 5.653 - 5.680: 99.5619% ( 3) 00:12:29.112 5.680 - 5.707: 99.5723% ( 2) 00:12:29.112 5.707 - 5.733: 99.5774% ( 1) 00:12:29.112 5.787 - 5.813: 99.5877% ( 2) 00:12:29.112 5.840 - 5.867: 99.5929% ( 1) 00:12:29.112 5.893 - 5.920: 99.5980% ( 1) 00:12:29.112 5.947 - 5.973: 99.6032% ( 1) 00:12:29.112 6.027 - 6.053: 99.6135% ( 2) 00:12:29.112 6.480 - 6.507: 99.6186% ( 1) 00:12:29.112 11.520 - 11.573: 99.6238% ( 1) 00:12:29.112 11.893 - 11.947: 99.6289% ( 1) 00:12:29.112 13.013 - 13.067: 99.6341% ( 1) 00:12:29.112 14.400 - 14.507: 99.6392% ( 1) 00:12:29.112 44.160 - 44.373: 99.6444% ( 1) 00:12:29.112 1699.840 - 1706.667: 99.6496% ( 1) 00:12:29.112 3986.773 - 4014.080: 99.9948% ( 67) 00:12:29.112 4014.080 - 4041.387: 100.0000% ( 1) 00:12:29.112 00:12:29.112 12:08:30 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:29.112 12:08:30 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:29.112 12:08:30 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:29.112 12:08:30 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:29.112 12:08:30 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:29.372 [2024-04-26 12:08:30.432179] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: 
rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:12:29.372 [ 00:12:29.372 { 00:12:29.372 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:29.372 "subtype": "Discovery", 00:12:29.372 "listen_addresses": [], 00:12:29.372 "allow_any_host": true, 00:12:29.372 "hosts": [] 00:12:29.372 }, 00:12:29.372 { 00:12:29.372 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:29.372 "subtype": "NVMe", 00:12:29.372 "listen_addresses": [ 00:12:29.372 { 00:12:29.372 "transport": "VFIOUSER", 00:12:29.372 "trtype": "VFIOUSER", 00:12:29.372 "adrfam": "IPv4", 00:12:29.372 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:29.372 "trsvcid": "0" 00:12:29.372 } 00:12:29.372 ], 00:12:29.372 "allow_any_host": true, 00:12:29.372 "hosts": [], 00:12:29.372 "serial_number": "SPDK1", 00:12:29.372 "model_number": "SPDK bdev Controller", 00:12:29.372 "max_namespaces": 32, 00:12:29.372 "min_cntlid": 1, 00:12:29.372 "max_cntlid": 65519, 00:12:29.372 "namespaces": [ 00:12:29.372 { 00:12:29.372 "nsid": 1, 00:12:29.372 "bdev_name": "Malloc1", 00:12:29.372 "name": "Malloc1", 00:12:29.372 "nguid": "6917D4D6FD0F4B3BB20F5453C314184D", 00:12:29.372 "uuid": "6917d4d6-fd0f-4b3b-b20f-5453c314184d" 00:12:29.372 } 00:12:29.372 ] 00:12:29.372 }, 00:12:29.372 { 00:12:29.372 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:29.372 "subtype": "NVMe", 00:12:29.372 "listen_addresses": [ 00:12:29.372 { 00:12:29.372 "transport": "VFIOUSER", 00:12:29.372 "trtype": "VFIOUSER", 00:12:29.372 "adrfam": "IPv4", 00:12:29.372 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:29.372 "trsvcid": "0" 00:12:29.372 } 00:12:29.372 ], 00:12:29.372 "allow_any_host": true, 00:12:29.372 "hosts": [], 00:12:29.372 "serial_number": "SPDK2", 00:12:29.372 "model_number": "SPDK bdev Controller", 00:12:29.372 "max_namespaces": 32, 00:12:29.372 "min_cntlid": 1, 00:12:29.372 "max_cntlid": 65519, 00:12:29.372 "namespaces": [ 00:12:29.372 { 00:12:29.372 "nsid": 1, 00:12:29.372 "bdev_name": "Malloc2", 00:12:29.372 "name": "Malloc2", 00:12:29.372 "nguid": "7357E9996A904560B63518C85D571730", 00:12:29.372 "uuid": "7357e999-6a90-4560-b635-18c85d571730" 00:12:29.372 } 00:12:29.372 ] 00:12:29.372 } 00:12:29.372 ] 00:12:29.372 12:08:30 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:29.372 12:08:30 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3324581 00:12:29.372 12:08:30 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:29.372 12:08:30 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:29.372 12:08:30 -- common/autotest_common.sh@1251 -- # local i=0 00:12:29.372 12:08:30 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:29.372 12:08:30 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:29.372 12:08:30 -- common/autotest_common.sh@1262 -- # return 0 00:12:29.372 12:08:30 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:29.372 12:08:30 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:29.373 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.632 Malloc3 00:12:29.633 [2024-04-26 12:08:30.627117] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:29.633 12:08:30 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:29.633 [2024-04-26 12:08:30.790182] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:29.633 12:08:30 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:29.633 Asynchronous Event Request test 00:12:29.633 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.633 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.633 Registering asynchronous event callbacks... 00:12:29.633 Starting namespace attribute notice tests for all controllers... 00:12:29.633 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:29.633 aer_cb - Changed Namespace 00:12:29.633 Cleaning up... 00:12:29.894 [ 00:12:29.894 { 00:12:29.894 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:29.894 "subtype": "Discovery", 00:12:29.894 "listen_addresses": [], 00:12:29.894 "allow_any_host": true, 00:12:29.894 "hosts": [] 00:12:29.894 }, 00:12:29.894 { 00:12:29.894 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:29.894 "subtype": "NVMe", 00:12:29.894 "listen_addresses": [ 00:12:29.894 { 00:12:29.894 "transport": "VFIOUSER", 00:12:29.894 "trtype": "VFIOUSER", 00:12:29.894 "adrfam": "IPv4", 00:12:29.894 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:29.894 "trsvcid": "0" 00:12:29.894 } 00:12:29.894 ], 00:12:29.894 "allow_any_host": true, 00:12:29.894 "hosts": [], 00:12:29.894 "serial_number": "SPDK1", 00:12:29.894 "model_number": "SPDK bdev Controller", 00:12:29.894 "max_namespaces": 32, 00:12:29.894 "min_cntlid": 1, 00:12:29.894 "max_cntlid": 65519, 00:12:29.894 "namespaces": [ 00:12:29.894 { 00:12:29.894 "nsid": 1, 00:12:29.894 "bdev_name": "Malloc1", 00:12:29.894 "name": "Malloc1", 00:12:29.894 "nguid": "6917D4D6FD0F4B3BB20F5453C314184D", 00:12:29.894 "uuid": "6917d4d6-fd0f-4b3b-b20f-5453c314184d" 00:12:29.894 }, 00:12:29.894 { 00:12:29.894 "nsid": 2, 00:12:29.894 "bdev_name": "Malloc3", 00:12:29.894 "name": "Malloc3", 00:12:29.894 "nguid": "C22DAAA11A494E70AB0B4B0F4A57545E", 00:12:29.894 "uuid": "c22daaa1-1a49-4e70-ab0b-4b0f4a57545e" 00:12:29.894 } 00:12:29.894 ] 00:12:29.894 }, 00:12:29.894 { 00:12:29.894 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:29.894 "subtype": "NVMe", 00:12:29.894 "listen_addresses": [ 00:12:29.894 { 00:12:29.894 "transport": "VFIOUSER", 00:12:29.894 "trtype": "VFIOUSER", 00:12:29.894 "adrfam": "IPv4", 00:12:29.894 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:29.894 "trsvcid": "0" 00:12:29.894 } 00:12:29.894 ], 00:12:29.894 "allow_any_host": true, 00:12:29.894 "hosts": [], 00:12:29.894 "serial_number": "SPDK2", 00:12:29.894 "model_number": "SPDK bdev Controller", 00:12:29.894 "max_namespaces": 32, 00:12:29.894 "min_cntlid": 1, 
00:12:29.894 "max_cntlid": 65519, 00:12:29.894 "namespaces": [ 00:12:29.894 { 00:12:29.894 "nsid": 1, 00:12:29.894 "bdev_name": "Malloc2", 00:12:29.894 "name": "Malloc2", 00:12:29.894 "nguid": "7357E9996A904560B63518C85D571730", 00:12:29.894 "uuid": "7357e999-6a90-4560-b635-18c85d571730" 00:12:29.894 } 00:12:29.894 ] 00:12:29.894 } 00:12:29.894 ] 00:12:29.894 12:08:30 -- target/nvmf_vfio_user.sh@44 -- # wait 3324581 00:12:29.894 12:08:30 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:29.894 12:08:30 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:29.894 12:08:30 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:29.894 12:08:30 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:29.894 [2024-04-26 12:08:31.006180] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:12:29.895 [2024-04-26 12:08:31.006221] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3324602 ] 00:12:29.895 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.895 [2024-04-26 12:08:31.038338] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:29.895 [2024-04-26 12:08:31.047073] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:29.895 [2024-04-26 12:08:31.047094] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3e9daec000 00:12:29.895 [2024-04-26 12:08:31.048077] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.895 [2024-04-26 12:08:31.049079] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.895 [2024-04-26 12:08:31.050084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.895 [2024-04-26 12:08:31.051090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:29.895 [2024-04-26 12:08:31.052100] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:29.895 [2024-04-26 12:08:31.053103] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.895 [2024-04-26 12:08:31.054112] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:29.895 [2024-04-26 12:08:31.055116] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.895 [2024-04-26 12:08:31.056130] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:29.895 [2024-04-26 12:08:31.056142] vfio_user_pci.c: 
233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3e9dae1000 00:12:29.895 [2024-04-26 12:08:31.057469] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:29.895 [2024-04-26 12:08:31.073672] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:29.895 [2024-04-26 12:08:31.073695] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:29.895 [2024-04-26 12:08:31.078778] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:29.895 [2024-04-26 12:08:31.078820] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:29.895 [2024-04-26 12:08:31.078901] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:29.895 [2024-04-26 12:08:31.078917] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:29.895 [2024-04-26 12:08:31.078922] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:29.895 [2024-04-26 12:08:31.079778] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:29.895 [2024-04-26 12:08:31.079787] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:29.895 [2024-04-26 12:08:31.079794] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:29.895 [2024-04-26 12:08:31.080782] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:29.895 [2024-04-26 12:08:31.080790] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:29.895 [2024-04-26 12:08:31.080798] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:29.895 [2024-04-26 12:08:31.081785] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:29.895 [2024-04-26 12:08:31.081794] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:29.895 [2024-04-26 12:08:31.082795] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:29.895 [2024-04-26 12:08:31.082803] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:29.895 [2024-04-26 12:08:31.082808] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:29.895 [2024-04-26 12:08:31.082819] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:29.895 [2024-04-26 12:08:31.082924] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:29.895 [2024-04-26 12:08:31.082930] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:29.895 [2024-04-26 12:08:31.082934] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:29.895 [2024-04-26 12:08:31.083800] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:29.895 [2024-04-26 12:08:31.084802] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:29.895 [2024-04-26 12:08:31.085809] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:29.895 [2024-04-26 12:08:31.086811] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:29.895 [2024-04-26 12:08:31.086851] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:29.895 [2024-04-26 12:08:31.087824] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:29.895 [2024-04-26 12:08:31.087832] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:29.895 [2024-04-26 12:08:31.087842] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:29.895 [2024-04-26 12:08:31.087863] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:29.895 [2024-04-26 12:08:31.087874] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:29.895 [2024-04-26 12:08:31.087888] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:29.895 [2024-04-26 12:08:31.087892] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.895 [2024-04-26 12:08:31.087904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.895 [2024-04-26 12:08:31.095844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:29.895 [2024-04-26 12:08:31.095856] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:29.895 [2024-04-26 12:08:31.095860] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:29.895 [2024-04-26 12:08:31.095865] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:29.895 [2024-04-26 12:08:31.095869] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:29.895 [2024-04-26 12:08:31.095874] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:29.895 [2024-04-26 12:08:31.095878] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:29.895 [2024-04-26 12:08:31.095883] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:29.895 [2024-04-26 12:08:31.095890] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:29.895 [2024-04-26 12:08:31.095903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:29.895 [2024-04-26 12:08:31.103843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:29.895 [2024-04-26 12:08:31.103857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.895 [2024-04-26 12:08:31.103866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.895 [2024-04-26 12:08:31.103874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.896 [2024-04-26 12:08:31.103882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.896 [2024-04-26 12:08:31.103887] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:29.896 [2024-04-26 12:08:31.103895] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:29.896 [2024-04-26 12:08:31.103904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:29.896 [2024-04-26 12:08:31.111848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:29.896 [2024-04-26 12:08:31.111855] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:29.896 [2024-04-26 12:08:31.111860] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:29.896 [2024-04-26 12:08:31.111869] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:29.896 [2024-04-26 12:08:31.111874] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:29.896 [2024-04-26 12:08:31.111883] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:30.158 [2024-04-26 12:08:31.119842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:30.158 [2024-04-26 12:08:31.119893] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:30.158 [2024-04-26 12:08:31.119901] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:30.158 [2024-04-26 12:08:31.119908] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:30.158 [2024-04-26 12:08:31.119913] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:30.158 [2024-04-26 12:08:31.119919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:30.158 [2024-04-26 12:08:31.127844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:30.158 [2024-04-26 12:08:31.127863] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:30.158 [2024-04-26 12:08:31.127872] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:30.158 [2024-04-26 12:08:31.127882] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:30.158 [2024-04-26 12:08:31.127889] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:30.158 [2024-04-26 12:08:31.127893] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.158 [2024-04-26 12:08:31.127899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.158 [2024-04-26 12:08:31.135844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:30.158 [2024-04-26 12:08:31.135858] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:30.158 [2024-04-26 12:08:31.135865] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:30.158 [2024-04-26 12:08:31.135872] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:30.158 [2024-04-26 12:08:31.135876] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.158 [2024-04-26 12:08:31.135882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.158 [2024-04-26 12:08:31.143845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:30.158 [2024-04-26 12:08:31.143854] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:30.158 [2024-04-26 12:08:31.143860] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:30.158 [2024-04-26 12:08:31.143868] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:30.158 [2024-04-26 12:08:31.143874] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:30.158 [2024-04-26 12:08:31.143878] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:30.158 [2024-04-26 12:08:31.143883] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:30.158 [2024-04-26 12:08:31.143888] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:30.158 [2024-04-26 12:08:31.143893] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:30.159 [2024-04-26 12:08:31.143908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:30.159 [2024-04-26 12:08:31.151842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:30.159 [2024-04-26 12:08:31.151855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:30.159 [2024-04-26 12:08:31.159844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:30.159 [2024-04-26 12:08:31.159856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:30.159 [2024-04-26 12:08:31.167842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:30.159 [2024-04-26 12:08:31.167855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:30.159 [2024-04-26 12:08:31.175842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:30.159 [2024-04-26 12:08:31.175854] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:30.159 [2024-04-26 12:08:31.175859] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:30.159 [2024-04-26 12:08:31.175862] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:30.159 [2024-04-26 12:08:31.175866] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:30.159 [2024-04-26 12:08:31.175872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:30.159 
[2024-04-26 12:08:31.175879] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:30.159 [2024-04-26 12:08:31.175884] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:30.159 [2024-04-26 12:08:31.175889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:30.159 [2024-04-26 12:08:31.175897] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:30.159 [2024-04-26 12:08:31.175901] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.159 [2024-04-26 12:08:31.175906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.159 [2024-04-26 12:08:31.175914] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:30.159 [2024-04-26 12:08:31.175918] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:30.159 [2024-04-26 12:08:31.175924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:30.159 [2024-04-26 12:08:31.183842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:30.159 [2024-04-26 12:08:31.183857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:30.159 [2024-04-26 12:08:31.183866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:30.159 [2024-04-26 12:08:31.183872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:30.159 ===================================================== 00:12:30.159 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:30.159 ===================================================== 00:12:30.159 Controller Capabilities/Features 00:12:30.159 ================================ 00:12:30.159 Vendor ID: 4e58 00:12:30.159 Subsystem Vendor ID: 4e58 00:12:30.159 Serial Number: SPDK2 00:12:30.159 Model Number: SPDK bdev Controller 00:12:30.159 Firmware Version: 24.05 00:12:30.159 Recommended Arb Burst: 6 00:12:30.159 IEEE OUI Identifier: 8d 6b 50 00:12:30.159 Multi-path I/O 00:12:30.159 May have multiple subsystem ports: Yes 00:12:30.159 May have multiple controllers: Yes 00:12:30.159 Associated with SR-IOV VF: No 00:12:30.159 Max Data Transfer Size: 131072 00:12:30.159 Max Number of Namespaces: 32 00:12:30.159 Max Number of I/O Queues: 127 00:12:30.159 NVMe Specification Version (VS): 1.3 00:12:30.159 NVMe Specification Version (Identify): 1.3 00:12:30.159 Maximum Queue Entries: 256 00:12:30.159 Contiguous Queues Required: Yes 00:12:30.159 Arbitration Mechanisms Supported 00:12:30.159 Weighted Round Robin: Not Supported 00:12:30.159 Vendor Specific: Not Supported 00:12:30.159 Reset Timeout: 15000 ms 00:12:30.159 Doorbell Stride: 4 bytes 00:12:30.159 NVM Subsystem Reset: Not Supported 00:12:30.159 Command Sets Supported 00:12:30.159 NVM Command Set: Supported 00:12:30.159 Boot Partition: Not Supported 00:12:30.159 
Memory Page Size Minimum: 4096 bytes 00:12:30.159 Memory Page Size Maximum: 4096 bytes 00:12:30.159 Persistent Memory Region: Not Supported 00:12:30.159 Optional Asynchronous Events Supported 00:12:30.159 Namespace Attribute Notices: Supported 00:12:30.159 Firmware Activation Notices: Not Supported 00:12:30.159 ANA Change Notices: Not Supported 00:12:30.159 PLE Aggregate Log Change Notices: Not Supported 00:12:30.159 LBA Status Info Alert Notices: Not Supported 00:12:30.159 EGE Aggregate Log Change Notices: Not Supported 00:12:30.159 Normal NVM Subsystem Shutdown event: Not Supported 00:12:30.159 Zone Descriptor Change Notices: Not Supported 00:12:30.159 Discovery Log Change Notices: Not Supported 00:12:30.159 Controller Attributes 00:12:30.159 128-bit Host Identifier: Supported 00:12:30.159 Non-Operational Permissive Mode: Not Supported 00:12:30.159 NVM Sets: Not Supported 00:12:30.159 Read Recovery Levels: Not Supported 00:12:30.159 Endurance Groups: Not Supported 00:12:30.159 Predictable Latency Mode: Not Supported 00:12:30.159 Traffic Based Keep ALive: Not Supported 00:12:30.159 Namespace Granularity: Not Supported 00:12:30.159 SQ Associations: Not Supported 00:12:30.159 UUID List: Not Supported 00:12:30.159 Multi-Domain Subsystem: Not Supported 00:12:30.159 Fixed Capacity Management: Not Supported 00:12:30.159 Variable Capacity Management: Not Supported 00:12:30.159 Delete Endurance Group: Not Supported 00:12:30.159 Delete NVM Set: Not Supported 00:12:30.159 Extended LBA Formats Supported: Not Supported 00:12:30.159 Flexible Data Placement Supported: Not Supported 00:12:30.159 00:12:30.159 Controller Memory Buffer Support 00:12:30.159 ================================ 00:12:30.159 Supported: No 00:12:30.159 00:12:30.159 Persistent Memory Region Support 00:12:30.159 ================================ 00:12:30.159 Supported: No 00:12:30.159 00:12:30.159 Admin Command Set Attributes 00:12:30.159 ============================ 00:12:30.159 Security Send/Receive: Not Supported 00:12:30.159 Format NVM: Not Supported 00:12:30.159 Firmware Activate/Download: Not Supported 00:12:30.159 Namespace Management: Not Supported 00:12:30.159 Device Self-Test: Not Supported 00:12:30.159 Directives: Not Supported 00:12:30.159 NVMe-MI: Not Supported 00:12:30.159 Virtualization Management: Not Supported 00:12:30.159 Doorbell Buffer Config: Not Supported 00:12:30.159 Get LBA Status Capability: Not Supported 00:12:30.159 Command & Feature Lockdown Capability: Not Supported 00:12:30.159 Abort Command Limit: 4 00:12:30.159 Async Event Request Limit: 4 00:12:30.159 Number of Firmware Slots: N/A 00:12:30.159 Firmware Slot 1 Read-Only: N/A 00:12:30.159 Firmware Activation Without Reset: N/A 00:12:30.159 Multiple Update Detection Support: N/A 00:12:30.159 Firmware Update Granularity: No Information Provided 00:12:30.159 Per-Namespace SMART Log: No 00:12:30.159 Asymmetric Namespace Access Log Page: Not Supported 00:12:30.159 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:30.159 Command Effects Log Page: Supported 00:12:30.159 Get Log Page Extended Data: Supported 00:12:30.159 Telemetry Log Pages: Not Supported 00:12:30.159 Persistent Event Log Pages: Not Supported 00:12:30.159 Supported Log Pages Log Page: May Support 00:12:30.159 Commands Supported & Effects Log Page: Not Supported 00:12:30.159 Feature Identifiers & Effects Log Page:May Support 00:12:30.159 NVMe-MI Commands & Effects Log Page: May Support 00:12:30.159 Data Area 4 for Telemetry Log: Not Supported 00:12:30.159 Error Log Page Entries Supported: 128 
00:12:30.159 Keep Alive: Supported 00:12:30.159 Keep Alive Granularity: 10000 ms 00:12:30.159 00:12:30.159 NVM Command Set Attributes 00:12:30.159 ========================== 00:12:30.159 Submission Queue Entry Size 00:12:30.159 Max: 64 00:12:30.159 Min: 64 00:12:30.159 Completion Queue Entry Size 00:12:30.159 Max: 16 00:12:30.159 Min: 16 00:12:30.159 Number of Namespaces: 32 00:12:30.159 Compare Command: Supported 00:12:30.159 Write Uncorrectable Command: Not Supported 00:12:30.159 Dataset Management Command: Supported 00:12:30.159 Write Zeroes Command: Supported 00:12:30.159 Set Features Save Field: Not Supported 00:12:30.159 Reservations: Not Supported 00:12:30.159 Timestamp: Not Supported 00:12:30.159 Copy: Supported 00:12:30.159 Volatile Write Cache: Present 00:12:30.159 Atomic Write Unit (Normal): 1 00:12:30.159 Atomic Write Unit (PFail): 1 00:12:30.159 Atomic Compare & Write Unit: 1 00:12:30.159 Fused Compare & Write: Supported 00:12:30.159 Scatter-Gather List 00:12:30.159 SGL Command Set: Supported (Dword aligned) 00:12:30.159 SGL Keyed: Not Supported 00:12:30.159 SGL Bit Bucket Descriptor: Not Supported 00:12:30.159 SGL Metadata Pointer: Not Supported 00:12:30.160 Oversized SGL: Not Supported 00:12:30.160 SGL Metadata Address: Not Supported 00:12:30.160 SGL Offset: Not Supported 00:12:30.160 Transport SGL Data Block: Not Supported 00:12:30.160 Replay Protected Memory Block: Not Supported 00:12:30.160 00:12:30.160 Firmware Slot Information 00:12:30.160 ========================= 00:12:30.160 Active slot: 1 00:12:30.160 Slot 1 Firmware Revision: 24.05 00:12:30.160 00:12:30.160 00:12:30.160 Commands Supported and Effects 00:12:30.160 ============================== 00:12:30.160 Admin Commands 00:12:30.160 -------------- 00:12:30.160 Get Log Page (02h): Supported 00:12:30.160 Identify (06h): Supported 00:12:30.160 Abort (08h): Supported 00:12:30.160 Set Features (09h): Supported 00:12:30.160 Get Features (0Ah): Supported 00:12:30.160 Asynchronous Event Request (0Ch): Supported 00:12:30.160 Keep Alive (18h): Supported 00:12:30.160 I/O Commands 00:12:30.160 ------------ 00:12:30.160 Flush (00h): Supported LBA-Change 00:12:30.160 Write (01h): Supported LBA-Change 00:12:30.160 Read (02h): Supported 00:12:30.160 Compare (05h): Supported 00:12:30.160 Write Zeroes (08h): Supported LBA-Change 00:12:30.160 Dataset Management (09h): Supported LBA-Change 00:12:30.160 Copy (19h): Supported LBA-Change 00:12:30.160 Unknown (79h): Supported LBA-Change 00:12:30.160 Unknown (7Ah): Supported 00:12:30.160 00:12:30.160 Error Log 00:12:30.160 ========= 00:12:30.160 00:12:30.160 Arbitration 00:12:30.160 =========== 00:12:30.160 Arbitration Burst: 1 00:12:30.160 00:12:30.160 Power Management 00:12:30.160 ================ 00:12:30.160 Number of Power States: 1 00:12:30.160 Current Power State: Power State #0 00:12:30.160 Power State #0: 00:12:30.160 Max Power: 0.00 W 00:12:30.160 Non-Operational State: Operational 00:12:30.160 Entry Latency: Not Reported 00:12:30.160 Exit Latency: Not Reported 00:12:30.160 Relative Read Throughput: 0 00:12:30.160 Relative Read Latency: 0 00:12:30.160 Relative Write Throughput: 0 00:12:30.160 Relative Write Latency: 0 00:12:30.160 Idle Power: Not Reported 00:12:30.160 Active Power: Not Reported 00:12:30.160 Non-Operational Permissive Mode: Not Supported 00:12:30.160 00:12:30.160 Health Information 00:12:30.160 ================== 00:12:30.160 Critical Warnings: 00:12:30.160 Available Spare Space: OK 00:12:30.160 Temperature: OK 00:12:30.160 Device Reliability: OK 00:12:30.160 
Read Only: No 00:12:30.160 Volatile Memory Backup: OK 00:12:30.160 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:30.160 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:30.160 Available Spare: 0% 00:12:30.160 Available Spare Threshold: 0% 00:12:30.160 Life Percentage Used: 0% 00:12:30.160 Data Units Read: 0 00:12:30.160 Data Units Written: 0 00:12:30.160 Host Read Commands: 0 00:12:30.160 Host Write Commands: 0 00:12:30.160 Controller Busy Time: 0 minutes 00:12:30.160 Power Cycles: 0 00:12:30.160 Power On Hours: 0 hours 00:12:30.160 Unsafe Shutdowns: 0 00:12:30.160 Unrecoverable Media Errors: 0 00:12:30.160 Lifetime Error Log Entries: 0 00:12:30.160 Warning Temperature Time: 0 minutes 00:12:30.160 Critical Temperature Time: 0 minutes 00:12:30.160 00:12:30.160 Number of Queues 00:12:30.160 ================ 00:12:30.160 Number of I/O Submission Queues: 127 00:12:30.160 Number of I/O Completion Queues: 127 00:12:30.160 00:12:30.160 Active Namespaces 00:12:30.160 ================= 00:12:30.160 Namespace ID:1 00:12:30.160 Error Recovery Timeout: Unlimited 00:12:30.160 Command Set Identifier: NVM (00h) 00:12:30.160 Deallocate: Supported 00:12:30.160 Deallocated/Unwritten Error: Not Supported 00:12:30.160 Deallocated Read Value: Unknown 00:12:30.160 Deallocate in Write Zeroes: Not Supported 00:12:30.160 Deallocated Guard Field: 0xFFFF 00:12:30.160 Flush: Supported 00:12:30.160 Reservation: Supported 00:12:30.160 Namespace Sharing Capabilities: Multiple Controllers 00:12:30.160 Size (in LBAs): 131072 (0GiB) 00:12:30.160 Capacity (in LBAs): 131072 (0GiB) 00:12:30.160 Utilization (in LBAs): 131072 (0GiB) 00:12:30.160 NGUID: 7357E9996A904560B63518C85D571730 00:12:30.160 UUID: 7357e999-6a90-4560-b635-18c85d571730 00:12:30.160 Thin Provisioning: Not Supported 00:12:30.160 Per-NS Atomic Units: Yes 00:12:30.160 Atomic Boundary Size (Normal): 0 00:12:30.160 Atomic Boundary Size (PFail): 0 00:12:30.160 Atomic Boundary Offset: 0 00:12:30.160 Maximum Single Source Range Length: 65535 00:12:30.160 Maximum Copy Length: 65535 00:12:30.160 Maximum Source Range Count: 1 00:12:30.160 NGUID/EUI64 Never Reused: No 00:12:30.160 Namespace Write Protected: No 00:12:30.160 Number of LBA Formats: 1 00:12:30.160 Current LBA Format: LBA Format #00 00:12:30.160 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:30.160 00:12:30.160
[2024-04-26 12:08:31.183972] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:30.160 [2024-04-26 12:08:31.191844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:30.160 [2024-04-26 12:08:31.191871] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:30.160 [2024-04-26 12:08:31.191881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.160 [2024-04-26 12:08:31.191887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.160 [2024-04-26 12:08:31.191893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.160 [2024-04-26 12:08:31.191900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.160 [2024-04-26 12:08:31.191948] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:30.160 [2024-04-26 12:08:31.191958] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:30.160 [2024-04-26 12:08:31.192948] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:30.160 [2024-04-26 12:08:31.192995] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:30.160 [2024-04-26 12:08:31.193001] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:30.160 [2024-04-26 12:08:31.193951] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:30.160 [2024-04-26 12:08:31.193962] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:30.160 [2024-04-26 12:08:31.194009] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:30.160 [2024-04-26 12:08:31.195389] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:30.160
12:08:31 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:30.160 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.422 [2024-04-26 12:08:31.379217] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:35.730 [2024-04-26 12:08:36.485014] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:35.730 Initializing NVMe Controllers 00:12:35.730 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:35.730 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:35.730 Initialization complete. Launching workers.
00:12:35.730 ======================================================== 00:12:35.730 Latency(us) 00:12:35.730 Device Information : IOPS MiB/s Average min max 00:12:35.730 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39954.00 156.07 3206.08 847.47 6857.65 00:12:35.730 ======================================================== 00:12:35.731 Total : 39954.00 156.07 3206.08 847.47 6857.65 00:12:35.731 00:12:35.731 12:08:36 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:35.731 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.731 [2024-04-26 12:08:36.655569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:41.019 [2024-04-26 12:08:41.676965] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:41.019 Initializing NVMe Controllers 00:12:41.020 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:41.020 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:41.020 Initialization complete. Launching workers. 00:12:41.020 ======================================================== 00:12:41.020 Latency(us) 00:12:41.020 Device Information : IOPS MiB/s Average min max 00:12:41.020 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35391.55 138.25 3616.30 1128.75 8984.25 00:12:41.020 ======================================================== 00:12:41.020 Total : 35391.55 138.25 3616.30 1128.75 8984.25 00:12:41.020 00:12:41.020 12:08:41 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:41.020 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.020 [2024-04-26 12:08:41.859223] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:46.362 [2024-04-26 12:08:46.996914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:46.362 Initializing NVMe Controllers 00:12:46.362 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:46.362 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:46.362 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:46.362 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:46.362 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:46.362 Initialization complete. Launching workers. 
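For reference, the two spdk_nvme_perf runs above share one invocation pattern and differ only in the workload flag. A minimal sketch, with the transport string and values taken from this job (TRID is just a convenience variable introduced here; paths are written relative to the SPDK checkout, while the job itself uses the absolute /var/jenkins/... path):
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # 5-second run, 4096-byte I/Os at queue depth 128, worker pinned by core mask 0x2
  ./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  ./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
A mixed pattern would swap in something like -w randrw -M 50, which is what the reconnect run above uses.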
00:12:46.362 Starting thread on core 2 00:12:46.362 Starting thread on core 3 00:12:46.362 Starting thread on core 1 00:12:46.362 12:08:47 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:46.362 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.362 [2024-04-26 12:08:47.257339] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.664 [2024-04-26 12:08:50.313429] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.664 Initializing NVMe Controllers 00:12:49.664 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.664 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.664 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:49.665 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:49.665 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:49.665 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:49.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:49.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:49.665 Initialization complete. Launching workers. 00:12:49.665 Starting thread on core 1 with urgent priority queue 00:12:49.665 Starting thread on core 2 with urgent priority queue 00:12:49.665 Starting thread on core 3 with urgent priority queue 00:12:49.665 Starting thread on core 0 with urgent priority queue 00:12:49.665 SPDK bdev Controller (SPDK2 ) core 0: 12753.33 IO/s 7.84 secs/100000 ios 00:12:49.665 SPDK bdev Controller (SPDK2 ) core 1: 16764.00 IO/s 5.97 secs/100000 ios 00:12:49.665 SPDK bdev Controller (SPDK2 ) core 2: 7603.33 IO/s 13.15 secs/100000 ios 00:12:49.665 SPDK bdev Controller (SPDK2 ) core 3: 13296.67 IO/s 7.52 secs/100000 ios 00:12:49.665 ======================================================== 00:12:49.665 00:12:49.665 12:08:50 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:49.665 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.665 [2024-04-26 12:08:50.571369] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.665 [2024-04-26 12:08:50.582440] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.665 Initializing NVMe Controllers 00:12:49.665 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.665 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.665 Namespace ID: 1 size: 0GB 00:12:49.665 Initialization complete. 00:12:49.665 INFO: using host memory buffer for IO 00:12:49.665 Hello world! 
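As a quick sanity check on the arbitration summary above, the two printed columns are reciprocals: secs/100000 ios = 100000 / (IO/s). For core 0, 100000 / 12753.33 ≈ 7.84 s, and for core 1, 100000 / 16764.00 ≈ 5.97 s, matching the reported values.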
00:12:49.665 12:08:50 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:49.665 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.665 [2024-04-26 12:08:50.843146] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:51.051 Initializing NVMe Controllers 00:12:51.051 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.051 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.051 Initialization complete. Launching workers. 00:12:51.051 submit (in ns) avg, min, max = 10006.8, 3860.0, 4002432.5 00:12:51.051 complete (in ns) avg, min, max = 15854.1, 2360.0, 3998486.7 00:12:51.051 00:12:51.051 Submit histogram 00:12:51.051 ================ 00:12:51.051 Range in us Cumulative Count 00:12:51.051 3.840 - 3.867: 0.1395% ( 27) 00:12:51.051 3.867 - 3.893: 2.9971% ( 553) 00:12:51.051 3.893 - 3.920: 8.3816% ( 1042) 00:12:51.051 3.920 - 3.947: 18.2823% ( 1916) 00:12:51.051 3.947 - 3.973: 30.1726% ( 2301) 00:12:51.051 3.973 - 4.000: 41.7166% ( 2234) 00:12:51.051 4.000 - 4.027: 55.8650% ( 2738) 00:12:51.051 4.027 - 4.053: 72.9382% ( 3304) 00:12:51.051 4.053 - 4.080: 86.3993% ( 2605) 00:12:51.051 4.080 - 4.107: 94.4399% ( 1556) 00:12:51.051 4.107 - 4.133: 98.0002% ( 689) 00:12:51.051 4.133 - 4.160: 99.0544% ( 204) 00:12:51.051 4.160 - 4.187: 99.3592% ( 59) 00:12:51.051 4.187 - 4.213: 99.4109% ( 10) 00:12:51.051 4.213 - 4.240: 99.4264% ( 3) 00:12:51.051 4.240 - 4.267: 99.4419% ( 3) 00:12:51.051 4.267 - 4.293: 99.4523% ( 2) 00:12:51.051 4.320 - 4.347: 99.4574% ( 1) 00:12:51.051 4.347 - 4.373: 99.4626% ( 1) 00:12:51.051 4.400 - 4.427: 99.4678% ( 1) 00:12:51.051 4.427 - 4.453: 99.4729% ( 1) 00:12:51.051 4.507 - 4.533: 99.4781% ( 1) 00:12:51.051 4.640 - 4.667: 99.4833% ( 1) 00:12:51.051 4.693 - 4.720: 99.4884% ( 1) 00:12:51.051 4.720 - 4.747: 99.4936% ( 1) 00:12:51.051 4.827 - 4.853: 99.4988% ( 1) 00:12:51.051 4.987 - 5.013: 99.5039% ( 1) 00:12:51.051 5.307 - 5.333: 99.5091% ( 1) 00:12:51.051 5.413 - 5.440: 99.5143% ( 1) 00:12:51.051 5.627 - 5.653: 99.5194% ( 1) 00:12:51.051 5.707 - 5.733: 99.5246% ( 1) 00:12:51.051 5.733 - 5.760: 99.5298% ( 1) 00:12:51.051 5.973 - 6.000: 99.5401% ( 2) 00:12:51.051 6.053 - 6.080: 99.5453% ( 1) 00:12:51.051 6.080 - 6.107: 99.5504% ( 1) 00:12:51.051 6.107 - 6.133: 99.5556% ( 1) 00:12:51.051 6.133 - 6.160: 99.5659% ( 2) 00:12:51.051 6.160 - 6.187: 99.5763% ( 2) 00:12:51.051 6.187 - 6.213: 99.5866% ( 2) 00:12:51.051 6.213 - 6.240: 99.5918% ( 1) 00:12:51.051 6.267 - 6.293: 99.5969% ( 1) 00:12:51.051 6.293 - 6.320: 99.6021% ( 1) 00:12:51.051 6.347 - 6.373: 99.6073% ( 1) 00:12:51.051 6.400 - 6.427: 99.6279% ( 4) 00:12:51.051 6.427 - 6.453: 99.6486% ( 4) 00:12:51.051 6.507 - 6.533: 99.6589% ( 2) 00:12:51.051 6.533 - 6.560: 99.6641% ( 1) 00:12:51.051 6.587 - 6.613: 99.6693% ( 1) 00:12:51.051 6.613 - 6.640: 99.6745% ( 1) 00:12:51.051 6.640 - 6.667: 99.6848% ( 2) 00:12:51.051 6.667 - 6.693: 99.6951% ( 2) 00:12:51.051 6.693 - 6.720: 99.7003% ( 1) 00:12:51.051 6.720 - 6.747: 99.7106% ( 2) 00:12:51.051 6.747 - 6.773: 99.7210% ( 2) 00:12:51.051 6.773 - 6.800: 99.7261% ( 1) 00:12:51.051 6.827 - 6.880: 99.7416% ( 3) 00:12:51.051 6.933 - 6.987: 99.7520% ( 2) 00:12:51.051 7.040 - 7.093: 99.7623% ( 2) 00:12:51.051 7.147 - 7.200: 99.7675% ( 1) 00:12:51.051 7.200 - 7.253: 99.7881% ( 4) 00:12:51.051 7.307 - 7.360: 99.7985% ( 2) 
00:12:51.051 7.467 - 7.520: 99.8036% ( 1) 00:12:51.051 7.520 - 7.573: 99.8088% ( 1) 00:12:51.051 7.627 - 7.680: 99.8140% ( 1) 00:12:51.051 8.160 - 8.213: 99.8191% ( 1) 00:12:51.051 8.267 - 8.320: 99.8243% ( 1) 00:12:51.051 9.013 - 9.067: 99.8295% ( 1) 00:12:51.051 9.280 - 9.333: 99.8346% ( 1) 00:12:51.051 12.533 - 12.587: 99.8398% ( 1) 00:12:51.051 13.120 - 13.173: 99.8450% ( 1) 00:12:51.051 14.293 - 14.400: 99.8501% ( 1) 00:12:51.051 3986.773 - 4014.080: 100.0000% ( 29) 00:12:51.051 00:12:51.051 Complete histogram 00:12:51.051 ================== 00:12:51.051 Range in us Cumulative Count 00:12:51.051 2.360 - 2.373: 2.0205% ( 391) 00:12:51.051 2.373 - 2.387: 2.1755% ( 30) 00:12:51.051 2.387 - 2.400: 2.6044% ( 83) 00:12:51.051 2.400 - 2.413: 53.2296% ( 9797) 00:12:51.051 2.413 - 2.427: 62.0298% ( 1703) 00:12:51.051 2.427 - [2024-04-26 12:08:51.937515] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:51.051 2.440: 74.2869% ( 2372) 00:12:51.051 2.440 - 2.453: 79.7024% ( 1048) 00:12:51.051 2.453 - 2.467: 82.0587% ( 456) 00:12:51.051 2.467 - 2.480: 85.3348% ( 634) 00:12:51.051 2.480 - 2.493: 91.0552% ( 1107) 00:12:51.051 2.493 - 2.507: 95.3493% ( 831) 00:12:51.051 2.507 - 2.520: 97.4835% ( 413) 00:12:51.051 2.520 - 2.533: 98.7702% ( 249) 00:12:51.051 2.533 - 2.547: 99.2094% ( 85) 00:12:51.051 2.547 - 2.560: 99.3541% ( 28) 00:12:51.051 2.560 - 2.573: 99.3954% ( 8) 00:12:51.051 2.587 - 2.600: 99.4006% ( 1) 00:12:51.051 2.627 - 2.640: 99.4057% ( 1) 00:12:51.051 4.373 - 4.400: 99.4109% ( 1) 00:12:51.051 4.400 - 4.427: 99.4161% ( 1) 00:12:51.051 4.453 - 4.480: 99.4264% ( 2) 00:12:51.051 4.480 - 4.507: 99.4316% ( 1) 00:12:51.051 4.560 - 4.587: 99.4419% ( 2) 00:12:51.051 4.613 - 4.640: 99.4471% ( 1) 00:12:51.051 4.640 - 4.667: 99.4523% ( 1) 00:12:51.051 4.747 - 4.773: 99.4626% ( 2) 00:12:51.051 4.773 - 4.800: 99.4729% ( 2) 00:12:51.051 4.800 - 4.827: 99.4781% ( 1) 00:12:51.051 4.880 - 4.907: 99.4833% ( 1) 00:12:51.051 4.987 - 5.013: 99.4884% ( 1) 00:12:51.051 5.040 - 5.067: 99.4936% ( 1) 00:12:51.051 5.067 - 5.093: 99.5091% ( 3) 00:12:51.051 5.093 - 5.120: 99.5143% ( 1) 00:12:51.051 5.173 - 5.200: 99.5246% ( 2) 00:12:51.051 5.253 - 5.280: 99.5349% ( 2) 00:12:51.051 5.280 - 5.307: 99.5453% ( 2) 00:12:51.051 5.333 - 5.360: 99.5504% ( 1) 00:12:51.051 5.387 - 5.413: 99.5608% ( 2) 00:12:51.051 5.413 - 5.440: 99.5659% ( 1) 00:12:51.051 5.440 - 5.467: 99.5711% ( 1) 00:12:51.051 5.520 - 5.547: 99.5763% ( 1) 00:12:51.051 5.547 - 5.573: 99.5814% ( 1) 00:12:51.051 5.600 - 5.627: 99.5866% ( 1) 00:12:51.051 5.627 - 5.653: 99.5918% ( 1) 00:12:51.051 5.707 - 5.733: 99.5969% ( 1) 00:12:51.051 5.760 - 5.787: 99.6021% ( 1) 00:12:51.051 5.813 - 5.840: 99.6073% ( 1) 00:12:51.051 5.867 - 5.893: 99.6124% ( 1) 00:12:51.051 5.947 - 5.973: 99.6176% ( 1) 00:12:51.051 6.027 - 6.053: 99.6228% ( 1) 00:12:51.051 6.107 - 6.133: 99.6279% ( 1) 00:12:51.051 6.160 - 6.187: 99.6331% ( 1) 00:12:51.051 6.293 - 6.320: 99.6383% ( 1) 00:12:51.051 6.373 - 6.400: 99.6434% ( 1) 00:12:51.051 7.520 - 7.573: 99.6486% ( 1) 00:12:51.051 11.680 - 11.733: 99.6538% ( 1) 00:12:51.051 11.733 - 11.787: 99.6589% ( 1) 00:12:51.051 12.533 - 12.587: 99.6641% ( 1) 00:12:51.051 3986.773 - 4014.080: 100.0000% ( 65) 00:12:51.051 00:12:51.051 12:08:51 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:51.051 12:08:51 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:51.051 12:08:51 -- 
target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:51.051 12:08:51 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:51.051 12:08:51 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:51.051 [ 00:12:51.051 { 00:12:51.051 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:51.051 "subtype": "Discovery", 00:12:51.051 "listen_addresses": [], 00:12:51.051 "allow_any_host": true, 00:12:51.051 "hosts": [] 00:12:51.051 }, 00:12:51.051 { 00:12:51.051 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:51.052 "subtype": "NVMe", 00:12:51.052 "listen_addresses": [ 00:12:51.052 { 00:12:51.052 "transport": "VFIOUSER", 00:12:51.052 "trtype": "VFIOUSER", 00:12:51.052 "adrfam": "IPv4", 00:12:51.052 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:51.052 "trsvcid": "0" 00:12:51.052 } 00:12:51.052 ], 00:12:51.052 "allow_any_host": true, 00:12:51.052 "hosts": [], 00:12:51.052 "serial_number": "SPDK1", 00:12:51.052 "model_number": "SPDK bdev Controller", 00:12:51.052 "max_namespaces": 32, 00:12:51.052 "min_cntlid": 1, 00:12:51.052 "max_cntlid": 65519, 00:12:51.052 "namespaces": [ 00:12:51.052 { 00:12:51.052 "nsid": 1, 00:12:51.052 "bdev_name": "Malloc1", 00:12:51.052 "name": "Malloc1", 00:12:51.052 "nguid": "6917D4D6FD0F4B3BB20F5453C314184D", 00:12:51.052 "uuid": "6917d4d6-fd0f-4b3b-b20f-5453c314184d" 00:12:51.052 }, 00:12:51.052 { 00:12:51.052 "nsid": 2, 00:12:51.052 "bdev_name": "Malloc3", 00:12:51.052 "name": "Malloc3", 00:12:51.052 "nguid": "C22DAAA11A494E70AB0B4B0F4A57545E", 00:12:51.052 "uuid": "c22daaa1-1a49-4e70-ab0b-4b0f4a57545e" 00:12:51.052 } 00:12:51.052 ] 00:12:51.052 }, 00:12:51.052 { 00:12:51.052 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:51.052 "subtype": "NVMe", 00:12:51.052 "listen_addresses": [ 00:12:51.052 { 00:12:51.052 "transport": "VFIOUSER", 00:12:51.052 "trtype": "VFIOUSER", 00:12:51.052 "adrfam": "IPv4", 00:12:51.052 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:51.052 "trsvcid": "0" 00:12:51.052 } 00:12:51.052 ], 00:12:51.052 "allow_any_host": true, 00:12:51.052 "hosts": [], 00:12:51.052 "serial_number": "SPDK2", 00:12:51.052 "model_number": "SPDK bdev Controller", 00:12:51.052 "max_namespaces": 32, 00:12:51.052 "min_cntlid": 1, 00:12:51.052 "max_cntlid": 65519, 00:12:51.052 "namespaces": [ 00:12:51.052 { 00:12:51.052 "nsid": 1, 00:12:51.052 "bdev_name": "Malloc2", 00:12:51.052 "name": "Malloc2", 00:12:51.052 "nguid": "7357E9996A904560B63518C85D571730", 00:12:51.052 "uuid": "7357e999-6a90-4560-b635-18c85d571730" 00:12:51.052 } 00:12:51.052 ] 00:12:51.052 } 00:12:51.052 ] 00:12:51.052 12:08:52 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:51.052 12:08:52 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3328788 00:12:51.052 12:08:52 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:51.052 12:08:52 -- common/autotest_common.sh@1251 -- # local i=0 00:12:51.052 12:08:52 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:51.052 12:08:52 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:51.052 12:08:52 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:51.052 12:08:52 -- common/autotest_common.sh@1262 -- # return 0 00:12:51.052 12:08:52 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:51.052 12:08:52 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:51.052 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.312 Malloc4 00:12:51.312 [2024-04-26 12:08:52.320432] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:51.312 12:08:52 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:51.312 [2024-04-26 12:08:52.490583] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:51.312 12:08:52 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:51.573 Asynchronous Event Request test 00:12:51.573 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.573 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.573 Registering asynchronous event callbacks... 00:12:51.573 Starting namespace attribute notice tests for all controllers... 00:12:51.573 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:51.573 aer_cb - Changed Namespace 00:12:51.573 Cleaning up... 00:12:51.573 [ 00:12:51.573 { 00:12:51.573 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:51.573 "subtype": "Discovery", 00:12:51.573 "listen_addresses": [], 00:12:51.573 "allow_any_host": true, 00:12:51.573 "hosts": [] 00:12:51.573 }, 00:12:51.573 { 00:12:51.573 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:51.573 "subtype": "NVMe", 00:12:51.573 "listen_addresses": [ 00:12:51.573 { 00:12:51.573 "transport": "VFIOUSER", 00:12:51.573 "trtype": "VFIOUSER", 00:12:51.573 "adrfam": "IPv4", 00:12:51.573 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:51.573 "trsvcid": "0" 00:12:51.573 } 00:12:51.573 ], 00:12:51.573 "allow_any_host": true, 00:12:51.573 "hosts": [], 00:12:51.573 "serial_number": "SPDK1", 00:12:51.573 "model_number": "SPDK bdev Controller", 00:12:51.573 "max_namespaces": 32, 00:12:51.573 "min_cntlid": 1, 00:12:51.573 "max_cntlid": 65519, 00:12:51.573 "namespaces": [ 00:12:51.573 { 00:12:51.573 "nsid": 1, 00:12:51.573 "bdev_name": "Malloc1", 00:12:51.573 "name": "Malloc1", 00:12:51.573 "nguid": "6917D4D6FD0F4B3BB20F5453C314184D", 00:12:51.573 "uuid": "6917d4d6-fd0f-4b3b-b20f-5453c314184d" 00:12:51.573 }, 00:12:51.573 { 00:12:51.573 "nsid": 2, 00:12:51.573 "bdev_name": "Malloc3", 00:12:51.573 "name": "Malloc3", 00:12:51.573 "nguid": "C22DAAA11A494E70AB0B4B0F4A57545E", 00:12:51.573 "uuid": "c22daaa1-1a49-4e70-ab0b-4b0f4a57545e" 00:12:51.573 } 00:12:51.573 ] 00:12:51.573 }, 00:12:51.573 { 00:12:51.573 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:51.573 "subtype": "NVMe", 00:12:51.573 "listen_addresses": [ 00:12:51.573 { 00:12:51.573 "transport": "VFIOUSER", 00:12:51.573 "trtype": "VFIOUSER", 00:12:51.573 "adrfam": "IPv4", 00:12:51.573 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:51.573 "trsvcid": "0" 00:12:51.573 } 00:12:51.573 ], 00:12:51.573 "allow_any_host": true, 00:12:51.573 "hosts": [], 00:12:51.573 "serial_number": "SPDK2", 00:12:51.573 "model_number": "SPDK bdev Controller", 00:12:51.573 "max_namespaces": 32, 00:12:51.573 "min_cntlid": 1, 
00:12:51.573 "max_cntlid": 65519, 00:12:51.573 "namespaces": [ 00:12:51.573 { 00:12:51.573 "nsid": 1, 00:12:51.573 "bdev_name": "Malloc2", 00:12:51.573 "name": "Malloc2", 00:12:51.573 "nguid": "7357E9996A904560B63518C85D571730", 00:12:51.573 "uuid": "7357e999-6a90-4560-b635-18c85d571730" 00:12:51.573 }, 00:12:51.573 { 00:12:51.573 "nsid": 2, 00:12:51.573 "bdev_name": "Malloc4", 00:12:51.573 "name": "Malloc4", 00:12:51.573 "nguid": "D3C41E9C43324B22A4BF9B59E869A04E", 00:12:51.573 "uuid": "d3c41e9c-4332-4b22-a4bf-9b59e869a04e" 00:12:51.573 } 00:12:51.573 ] 00:12:51.573 } 00:12:51.573 ] 00:12:51.573 12:08:52 -- target/nvmf_vfio_user.sh@44 -- # wait 3328788 00:12:51.573 12:08:52 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:51.573 12:08:52 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3319752 00:12:51.573 12:08:52 -- common/autotest_common.sh@936 -- # '[' -z 3319752 ']' 00:12:51.573 12:08:52 -- common/autotest_common.sh@940 -- # kill -0 3319752 00:12:51.573 12:08:52 -- common/autotest_common.sh@941 -- # uname 00:12:51.573 12:08:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:51.573 12:08:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3319752 00:12:51.573 12:08:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:51.573 12:08:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:51.573 12:08:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3319752' 00:12:51.573 killing process with pid 3319752 00:12:51.573 12:08:52 -- common/autotest_common.sh@955 -- # kill 3319752 00:12:51.573 [2024-04-26 12:08:52.735088] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:12:51.573 12:08:52 -- common/autotest_common.sh@960 -- # wait 3319752 00:12:51.835 12:08:52 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:51.835 12:08:52 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:51.835 12:08:52 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:51.835 12:08:52 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:51.835 12:08:52 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:51.835 12:08:52 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3328968 00:12:51.835 12:08:52 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3328968' 00:12:51.835 Process pid: 3328968 00:12:51.835 12:08:52 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:51.835 12:08:52 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:51.835 12:08:52 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3328968 00:12:51.835 12:08:52 -- common/autotest_common.sh@817 -- # '[' -z 3328968 ']' 00:12:51.835 12:08:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.835 12:08:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:51.835 12:08:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
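Stepping back, the AER exercise that just finished reduces to a short RPC sequence against the live target: start the aer test tool watching cnode2, then hot-add a second namespace so the subsystem emits a namespace-attribute-changed notice. A sketch using the exact names from this run (paths relative to the SPDK checkout; the touch file is only a readiness handshake used by the test script):
  ./test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file &
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
  ./scripts/rpc.py nvmf_get_subsystems   # Malloc4 now appears as nsid 2 under cnode2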
00:12:51.835 12:08:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:51.835 12:08:52 -- common/autotest_common.sh@10 -- # set +x 00:12:51.835 [2024-04-26 12:08:52.961926] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:51.835 [2024-04-26 12:08:52.962861] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:12:51.835 [2024-04-26 12:08:52.962901] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.835 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.835 [2024-04-26 12:08:53.024853] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.096 [2024-04-26 12:08:53.088090] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.096 [2024-04-26 12:08:53.088129] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.096 [2024-04-26 12:08:53.088138] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.096 [2024-04-26 12:08:53.088146] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.096 [2024-04-26 12:08:53.088153] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.096 [2024-04-26 12:08:53.088340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.096 [2024-04-26 12:08:53.088478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.096 [2024-04-26 12:08:53.088633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.096 [2024-04-26 12:08:53.088635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.096 [2024-04-26 12:08:53.150628] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:12:52.096 [2024-04-26 12:08:53.150637] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:12:52.096 [2024-04-26 12:08:53.150938] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:12:52.096 [2024-04-26 12:08:53.151127] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:52.096 [2024-04-26 12:08:53.151218] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
00:12:52.680 12:08:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:52.680 12:08:53 -- common/autotest_common.sh@850 -- # return 0 00:12:52.680 12:08:53 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:53.619 12:08:54 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:53.879 12:08:54 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:53.879 12:08:54 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:53.879 12:08:54 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:53.879 12:08:54 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:53.879 12:08:54 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:53.879 Malloc1 00:12:53.879 12:08:55 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:54.140 12:08:55 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:54.400 12:08:55 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:54.400 12:08:55 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:54.400 12:08:55 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:54.400 12:08:55 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:54.660 Malloc2 00:12:54.660 12:08:55 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:54.922 12:08:55 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:54.922 12:08:56 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:55.183 12:08:56 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:55.183 12:08:56 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3328968 00:12:55.183 12:08:56 -- common/autotest_common.sh@936 -- # '[' -z 3328968 ']' 00:12:55.183 12:08:56 -- common/autotest_common.sh@940 -- # kill -0 3328968 00:12:55.183 12:08:56 -- common/autotest_common.sh@941 -- # uname 00:12:55.183 12:08:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:55.183 12:08:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3328968 00:12:55.183 12:08:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:55.183 12:08:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:55.183 12:08:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3328968' 00:12:55.183 killing process with pid 3328968 00:12:55.183 12:08:56 -- common/autotest_common.sh@955 -- # kill 3328968 00:12:55.183 12:08:56 -- common/autotest_common.sh@960 -- # wait 3328968 00:12:55.445 12:08:56 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:12:55.445 12:08:56 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:55.445 00:12:55.445 real 0m50.437s 00:12:55.445 user 3m20.059s 00:12:55.445 sys 0m2.983s 00:12:55.445 12:08:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:55.445 12:08:56 -- common/autotest_common.sh@10 -- # set +x 00:12:55.445 ************************************ 00:12:55.445 END TEST nvmf_vfio_user 00:12:55.445 ************************************ 00:12:55.445 12:08:56 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:55.445 12:08:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:55.445 12:08:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:55.445 12:08:56 -- common/autotest_common.sh@10 -- # set +x 00:12:55.445 ************************************ 00:12:55.445 START TEST nvmf_vfio_user_nvme_compliance 00:12:55.445 ************************************ 00:12:55.445 12:08:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:55.707 * Looking for test storage... 00:12:55.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:55.707 12:08:56 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.707 12:08:56 -- nvmf/common.sh@7 -- # uname -s 00:12:55.707 12:08:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.707 12:08:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.707 12:08:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.707 12:08:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.708 12:08:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.708 12:08:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.708 12:08:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.708 12:08:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.708 12:08:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.708 12:08:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.708 12:08:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:55.708 12:08:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:55.708 12:08:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.708 12:08:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.708 12:08:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.708 12:08:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.708 12:08:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.708 12:08:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.708 12:08:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.708 12:08:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.708 12:08:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.708 12:08:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.708 12:08:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.708 12:08:56 -- paths/export.sh@5 -- # export PATH 00:12:55.708 12:08:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.708 12:08:56 -- nvmf/common.sh@47 -- # : 0 00:12:55.708 12:08:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.708 12:08:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.708 12:08:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.708 12:08:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.708 12:08:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.708 12:08:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.708 12:08:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.708 12:08:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.708 12:08:56 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:55.708 12:08:56 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:55.708 12:08:56 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:55.708 12:08:56 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:55.708 12:08:56 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:55.708 12:08:56 -- compliance/compliance.sh@20 -- # nvmfpid=3329726 00:12:55.708 12:08:56 -- compliance/compliance.sh@21 -- # echo 'Process pid: 3329726' 00:12:55.708 Process pid: 3329726 00:12:55.708 12:08:56 
-- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:55.708 12:08:56 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:55.708 12:08:56 -- compliance/compliance.sh@24 -- # waitforlisten 3329726 00:12:55.708 12:08:56 -- common/autotest_common.sh@817 -- # '[' -z 3329726 ']' 00:12:55.708 12:08:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.708 12:08:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:55.708 12:08:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.708 12:08:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:55.708 12:08:56 -- common/autotest_common.sh@10 -- # set +x 00:12:55.708 [2024-04-26 12:08:56.855961] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:12:55.708 [2024-04-26 12:08:56.856032] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.708 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.708 [2024-04-26 12:08:56.924464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:55.969 [2024-04-26 12:08:56.997485] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.970 [2024-04-26 12:08:56.997523] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.970 [2024-04-26 12:08:56.997532] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.970 [2024-04-26 12:08:56.997540] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.970 [2024-04-26 12:08:56.997547] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:55.970 [2024-04-26 12:08:56.997638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.970 [2024-04-26 12:08:56.997744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.970 [2024-04-26 12:08:56.997747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.544 12:08:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:56.544 12:08:57 -- common/autotest_common.sh@850 -- # return 0 00:12:56.544 12:08:57 -- compliance/compliance.sh@26 -- # sleep 1 00:12:57.485 12:08:58 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:57.485 12:08:58 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:57.485 12:08:58 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:57.485 12:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.485 12:08:58 -- common/autotest_common.sh@10 -- # set +x 00:12:57.485 12:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.485 12:08:58 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:57.485 12:08:58 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:57.485 12:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.485 12:08:58 -- common/autotest_common.sh@10 -- # set +x 00:12:57.485 malloc0 00:12:57.485 12:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.485 12:08:58 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:57.485 12:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.485 12:08:58 -- common/autotest_common.sh@10 -- # set +x 00:12:57.485 12:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.485 12:08:58 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:57.485 12:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.485 12:08:58 -- common/autotest_common.sh@10 -- # set +x 00:12:57.745 12:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.745 12:08:58 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:57.745 12:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.745 12:08:58 -- common/autotest_common.sh@10 -- # set +x 00:12:57.745 12:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.745 12:08:58 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:57.745 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.745 00:12:57.745 00:12:57.745 CUnit - A unit testing framework for C - Version 2.1-3 00:12:57.745 http://cunit.sourceforge.net/ 00:12:57.745 00:12:57.745 00:12:57.745 Suite: nvme_compliance 00:12:57.746 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-26 12:08:58.895277] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.746 [2024-04-26 12:08:58.896588] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:57.746 [2024-04-26 12:08:58.896599] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:57.746 [2024-04-26 12:08:58.896603] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:57.746 
[2024-04-26 12:08:58.898294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.746 passed 00:12:58.006 Test: admin_identify_ctrlr_verify_fused ...[2024-04-26 12:08:58.993879] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.006 [2024-04-26 12:08:58.996895] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.006 passed 00:12:58.006 Test: admin_identify_ns ...[2024-04-26 12:08:59.093088] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.006 [2024-04-26 12:08:59.153851] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:58.006 [2024-04-26 12:08:59.161849] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:58.006 [2024-04-26 12:08:59.182959] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.006 passed 00:12:58.267 Test: admin_get_features_mandatory_features ...[2024-04-26 12:08:59.273590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.268 [2024-04-26 12:08:59.276604] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.268 passed 00:12:58.268 Test: admin_get_features_optional_features ...[2024-04-26 12:08:59.372181] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.268 [2024-04-26 12:08:59.375199] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.268 passed 00:12:58.268 Test: admin_set_features_number_of_queues ...[2024-04-26 12:08:59.467077] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.528 [2024-04-26 12:08:59.571931] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.528 passed 00:12:58.528 Test: admin_get_log_page_mandatory_logs ...[2024-04-26 12:08:59.665947] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.528 [2024-04-26 12:08:59.668969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.528 passed 00:12:58.790 Test: admin_get_log_page_with_lpo ...[2024-04-26 12:08:59.762070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.790 [2024-04-26 12:08:59.829847] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:58.790 [2024-04-26 12:08:59.842900] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.790 passed 00:12:58.790 Test: fabric_property_get ...[2024-04-26 12:08:59.936933] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.790 [2024-04-26 12:08:59.938169] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:58.790 [2024-04-26 12:08:59.939953] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.790 passed 00:12:59.051 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-26 12:09:00.034475] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.051 [2024-04-26 12:09:00.035728] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:59.051 [2024-04-26 12:09:00.037508] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:12:59.052 passed 00:12:59.052 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-26 12:09:00.130634] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.052 [2024-04-26 12:09:00.213843] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:59.052 [2024-04-26 12:09:00.229847] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:59.052 [2024-04-26 12:09:00.234922] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.313 passed 00:12:59.313 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-26 12:09:00.326527] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.313 [2024-04-26 12:09:00.327756] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:59.313 [2024-04-26 12:09:00.329542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.313 passed 00:12:59.313 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-26 12:09:00.423685] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.313 [2024-04-26 12:09:00.499853] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:59.313 [2024-04-26 12:09:00.523855] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:59.313 [2024-04-26 12:09:00.528924] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.575 passed 00:12:59.575 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-26 12:09:00.620537] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.575 [2024-04-26 12:09:00.621760] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:59.575 [2024-04-26 12:09:00.621781] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:59.575 [2024-04-26 12:09:00.623551] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.575 passed 00:12:59.575 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-26 12:09:00.715102] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.836 [2024-04-26 12:09:00.806845] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:59.836 [2024-04-26 12:09:00.814847] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:59.836 [2024-04-26 12:09:00.822848] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:59.836 [2024-04-26 12:09:00.830842] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:59.836 [2024-04-26 12:09:00.859923] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.836 passed 00:12:59.836 Test: admin_create_io_sq_verify_pc ...[2024-04-26 12:09:00.954052] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.836 [2024-04-26 12:09:00.970853] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:59.836 [2024-04-26 12:09:00.988200] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.836 passed 00:13:00.097 Test: admin_create_io_qp_max_qps ...[2024-04-26 12:09:01.080735] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:01.036 [2024-04-26 12:09:02.189848] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:01.606 [2024-04-26 12:09:02.572999] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:01.606 passed 00:13:01.606 Test: admin_create_io_sq_shared_cq ...[2024-04-26 12:09:02.667102] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:01.606 [2024-04-26 12:09:02.798843] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:01.866 [2024-04-26 12:09:02.835914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:01.866 passed 00:13:01.866 00:13:01.866 Run Summary: Type Total Ran Passed Failed Inactive 00:13:01.866 suites 1 1 n/a 0 0 00:13:01.866 tests 18 18 18 0 0 00:13:01.866 asserts 360 360 360 0 n/a 00:13:01.866 00:13:01.866 Elapsed time = 1.654 seconds 00:13:01.866 12:09:02 -- compliance/compliance.sh@42 -- # killprocess 3329726 00:13:01.866 12:09:02 -- common/autotest_common.sh@936 -- # '[' -z 3329726 ']' 00:13:01.866 12:09:02 -- common/autotest_common.sh@940 -- # kill -0 3329726 00:13:01.866 12:09:02 -- common/autotest_common.sh@941 -- # uname 00:13:01.866 12:09:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:01.866 12:09:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3329726 00:13:01.866 12:09:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:01.866 12:09:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:01.866 12:09:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3329726' 00:13:01.866 killing process with pid 3329726 00:13:01.866 12:09:02 -- common/autotest_common.sh@955 -- # kill 3329726 00:13:01.866 12:09:02 -- common/autotest_common.sh@960 -- # wait 3329726 00:13:01.866 12:09:03 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:02.126 12:09:03 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:02.126 00:13:02.126 real 0m6.428s 00:13:02.126 user 0m18.376s 00:13:02.126 sys 0m0.464s 00:13:02.126 12:09:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:02.126 12:09:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.126 ************************************ 00:13:02.126 END TEST nvmf_vfio_user_nvme_compliance 00:13:02.126 ************************************ 00:13:02.126 12:09:03 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:02.126 12:09:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:02.126 12:09:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.126 12:09:03 -- common/autotest_common.sh@10 -- # set +x 00:13:02.126 ************************************ 00:13:02.126 START TEST nvmf_vfio_user_fuzz 00:13:02.126 ************************************ 00:13:02.126 12:09:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:02.387 * Looking for test storage... 
00:13:02.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.387 12:09:03 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.387 12:09:03 -- nvmf/common.sh@7 -- # uname -s 00:13:02.387 12:09:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.387 12:09:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.387 12:09:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.387 12:09:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.387 12:09:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.387 12:09:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.387 12:09:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.387 12:09:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.387 12:09:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.387 12:09:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.387 12:09:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:02.387 12:09:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:02.387 12:09:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.387 12:09:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.387 12:09:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.387 12:09:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.387 12:09:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.387 12:09:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.387 12:09:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.387 12:09:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.387 12:09:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.387 12:09:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.387 12:09:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.387 12:09:03 -- paths/export.sh@5 -- # export PATH 00:13:02.387 12:09:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.387 12:09:03 -- nvmf/common.sh@47 -- # : 0 00:13:02.387 12:09:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.387 12:09:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.387 12:09:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.387 12:09:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.387 12:09:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.387 12:09:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.387 12:09:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.387 12:09:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.387 12:09:03 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:02.387 12:09:03 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:02.387 12:09:03 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:02.387 12:09:03 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:02.387 12:09:03 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:02.387 12:09:03 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:02.387 12:09:03 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:02.387 12:09:03 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3331232 00:13:02.387 12:09:03 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3331232' 00:13:02.387 Process pid: 3331232 00:13:02.387 12:09:03 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:02.387 12:09:03 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3331232 00:13:02.387 12:09:03 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:02.387 12:09:03 -- common/autotest_common.sh@817 -- # '[' -z 3331232 ']' 00:13:02.387 12:09:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.387 12:09:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:02.387 12:09:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:02.387 12:09:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:02.387 12:09:03 -- common/autotest_common.sh@10 -- # set +x 00:13:03.325 12:09:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:03.325 12:09:04 -- common/autotest_common.sh@850 -- # return 0 00:13:03.325 12:09:04 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:04.265 12:09:05 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:04.265 12:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.265 12:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:04.265 12:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.265 12:09:05 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:04.265 12:09:05 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:04.265 12:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.265 12:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:04.265 malloc0 00:13:04.265 12:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.265 12:09:05 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:04.265 12:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.265 12:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:04.265 12:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.265 12:09:05 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:04.265 12:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.265 12:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:04.265 12:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.265 12:09:05 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:04.265 12:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.265 12:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:04.265 12:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.265 12:09:05 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:04.265 12:09:05 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:36.423 Fuzzing completed. 
Shutting down the fuzz application 00:13:36.423 00:13:36.423 Dumping successful admin opcodes: 00:13:36.423 8, 9, 10, 24, 00:13:36.423 Dumping successful io opcodes: 00:13:36.423 0, 00:13:36.423 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1193640, total successful commands: 4695, random_seed: 2623568960 00:13:36.423 NS: 0x200003a1ef00 admin qp, Total commands completed: 149952, total successful commands: 1205, random_seed: 3422308288 00:13:36.423 12:09:36 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:36.423 12:09:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.423 12:09:36 -- common/autotest_common.sh@10 -- # set +x 00:13:36.423 12:09:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.423 12:09:36 -- target/vfio_user_fuzz.sh@46 -- # killprocess 3331232 00:13:36.423 12:09:36 -- common/autotest_common.sh@936 -- # '[' -z 3331232 ']' 00:13:36.423 12:09:36 -- common/autotest_common.sh@940 -- # kill -0 3331232 00:13:36.423 12:09:36 -- common/autotest_common.sh@941 -- # uname 00:13:36.423 12:09:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:36.423 12:09:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3331232 00:13:36.423 12:09:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:36.423 12:09:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:36.423 12:09:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3331232' 00:13:36.423 killing process with pid 3331232 00:13:36.423 12:09:36 -- common/autotest_common.sh@955 -- # kill 3331232 00:13:36.423 12:09:36 -- common/autotest_common.sh@960 -- # wait 3331232 00:13:36.423 12:09:36 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:36.423 12:09:36 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:36.423 00:13:36.423 real 0m33.677s 00:13:36.423 user 0m40.264s 00:13:36.423 sys 0m22.731s 00:13:36.423 12:09:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:36.423 12:09:36 -- common/autotest_common.sh@10 -- # set +x 00:13:36.423 ************************************ 00:13:36.423 END TEST nvmf_vfio_user_fuzz 00:13:36.423 ************************************ 00:13:36.423 12:09:36 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:36.423 12:09:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:36.423 12:09:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:36.423 12:09:36 -- common/autotest_common.sh@10 -- # set +x 00:13:36.423 ************************************ 00:13:36.423 START TEST nvmf_host_management 00:13:36.423 ************************************ 00:13:36.423 12:09:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:36.423 * Looking for test storage... 
00:13:36.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.423 12:09:37 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.423 12:09:37 -- nvmf/common.sh@7 -- # uname -s 00:13:36.423 12:09:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.423 12:09:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.423 12:09:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.423 12:09:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.423 12:09:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.423 12:09:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.424 12:09:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.424 12:09:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.424 12:09:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.424 12:09:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.424 12:09:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:36.424 12:09:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:36.424 12:09:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.424 12:09:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.424 12:09:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.424 12:09:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.424 12:09:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.424 12:09:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.424 12:09:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.424 12:09:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.424 12:09:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.424 12:09:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.424 12:09:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.424 12:09:37 -- paths/export.sh@5 -- # export PATH 00:13:36.424 12:09:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.424 12:09:37 -- nvmf/common.sh@47 -- # : 0 00:13:36.424 12:09:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.424 12:09:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.424 12:09:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.424 12:09:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.424 12:09:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.424 12:09:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.424 12:09:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.424 12:09:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.424 12:09:37 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:36.424 12:09:37 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:36.424 12:09:37 -- target/host_management.sh@105 -- # nvmftestinit 00:13:36.424 12:09:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:36.424 12:09:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.424 12:09:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:36.424 12:09:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:36.424 12:09:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:36.424 12:09:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.424 12:09:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.424 12:09:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.424 12:09:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:36.424 12:09:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:36.424 12:09:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.424 12:09:37 -- common/autotest_common.sh@10 -- # set +x 00:13:43.106 12:09:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:43.106 12:09:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.106 12:09:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.106 12:09:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.106 12:09:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.106 12:09:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.106 12:09:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.106 12:09:44 -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.106 12:09:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.106 
12:09:44 -- nvmf/common.sh@296 -- # e810=() 00:13:43.106 12:09:44 -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.106 12:09:44 -- nvmf/common.sh@297 -- # x722=() 00:13:43.106 12:09:44 -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.106 12:09:44 -- nvmf/common.sh@298 -- # mlx=() 00:13:43.106 12:09:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.106 12:09:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.106 12:09:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.106 12:09:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.106 12:09:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.106 12:09:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.106 12:09:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.106 12:09:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.106 12:09:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.106 12:09:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.106 12:09:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.106 12:09:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.106 12:09:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.106 12:09:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.106 12:09:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.106 12:09:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.106 12:09:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.106 12:09:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.106 12:09:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.106 12:09:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:43.106 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:43.106 12:09:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.106 12:09:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.106 12:09:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.106 12:09:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.106 12:09:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.106 12:09:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.106 12:09:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:43.106 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:43.106 12:09:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.106 12:09:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.106 12:09:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.107 12:09:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.107 12:09:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.107 12:09:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.107 12:09:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.107 12:09:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.107 12:09:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.107 12:09:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.107 12:09:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:43.107 12:09:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.107 12:09:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:31:00.0: cvl_0_0' 00:13:43.107 Found net devices under 0000:31:00.0: cvl_0_0 00:13:43.107 12:09:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.107 12:09:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.107 12:09:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.107 12:09:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:43.107 12:09:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.107 12:09:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:43.107 Found net devices under 0000:31:00.1: cvl_0_1 00:13:43.107 12:09:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.107 12:09:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:43.107 12:09:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:43.107 12:09:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:43.107 12:09:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:43.107 12:09:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:43.107 12:09:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.107 12:09:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.107 12:09:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.107 12:09:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.107 12:09:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.107 12:09:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.107 12:09:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.107 12:09:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.107 12:09:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.107 12:09:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.107 12:09:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.107 12:09:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.107 12:09:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.368 12:09:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.368 12:09:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.368 12:09:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.368 12:09:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.368 12:09:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.368 12:09:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.628 12:09:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:13:43.629 00:13:43.629 --- 10.0.0.2 ping statistics --- 00:13:43.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.629 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:13:43.629 12:09:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:13:43.629 00:13:43.629 --- 10.0.0.1 ping statistics --- 00:13:43.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.629 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:13:43.629 12:09:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.629 12:09:44 -- nvmf/common.sh@411 -- # return 0 00:13:43.629 12:09:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:43.629 12:09:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.629 12:09:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:43.629 12:09:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:43.629 12:09:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.629 12:09:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:43.629 12:09:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:43.629 12:09:44 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:13:43.629 12:09:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:43.629 12:09:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:43.629 12:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:43.629 ************************************ 00:13:43.629 START TEST nvmf_host_management 00:13:43.629 ************************************ 00:13:43.629 12:09:44 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:13:43.629 12:09:44 -- target/host_management.sh@69 -- # starttarget 00:13:43.629 12:09:44 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:43.629 12:09:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:43.629 12:09:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:43.629 12:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:43.629 12:09:44 -- nvmf/common.sh@470 -- # nvmfpid=3342092 00:13:43.629 12:09:44 -- nvmf/common.sh@471 -- # waitforlisten 3342092 00:13:43.629 12:09:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:43.629 12:09:44 -- common/autotest_common.sh@817 -- # '[' -z 3342092 ']' 00:13:43.629 12:09:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.629 12:09:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:43.629 12:09:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.629 12:09:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:43.629 12:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:43.889 [2024-04-26 12:09:44.866395] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:13:43.889 [2024-04-26 12:09:44.866440] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.889 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.889 [2024-04-26 12:09:44.950283] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:43.889 [2024-04-26 12:09:45.023475] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:43.889 [2024-04-26 12:09:45.023518] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.889 [2024-04-26 12:09:45.023528] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.889 [2024-04-26 12:09:45.023535] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.889 [2024-04-26 12:09:45.023541] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.889 [2024-04-26 12:09:45.023682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.889 [2024-04-26 12:09:45.023822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.889 [2024-04-26 12:09:45.024010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:43.889 [2024-04-26 12:09:45.024154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.460 12:09:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:44.460 12:09:45 -- common/autotest_common.sh@850 -- # return 0 00:13:44.460 12:09:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:44.460 12:09:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:44.460 12:09:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.720 12:09:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.720 12:09:45 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:44.720 12:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:44.720 12:09:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.720 [2024-04-26 12:09:45.686346] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.720 12:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:44.720 12:09:45 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:44.720 12:09:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:44.720 12:09:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.720 12:09:45 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:44.720 12:09:45 -- target/host_management.sh@23 -- # cat 00:13:44.720 12:09:45 -- target/host_management.sh@30 -- # rpc_cmd 00:13:44.720 12:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:44.720 12:09:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.720 Malloc0 00:13:44.720 [2024-04-26 12:09:45.745741] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.720 12:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:44.720 12:09:45 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:44.720 12:09:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:44.720 12:09:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.720 12:09:45 -- target/host_management.sh@73 -- # perfpid=3342463 00:13:44.720 12:09:45 -- target/host_management.sh@74 -- # waitforlisten 3342463 /var/tmp/bdevperf.sock 00:13:44.720 12:09:45 -- common/autotest_common.sh@817 -- # '[' -z 3342463 ']' 00:13:44.720 12:09:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:44.720 12:09:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:44.720 12:09:45 -- target/host_management.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:44.720 12:09:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:44.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:44.720 12:09:45 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:44.720 12:09:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:44.720 12:09:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.720 12:09:45 -- nvmf/common.sh@521 -- # config=() 00:13:44.720 12:09:45 -- nvmf/common.sh@521 -- # local subsystem config 00:13:44.720 12:09:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:44.720 12:09:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:44.720 { 00:13:44.720 "params": { 00:13:44.720 "name": "Nvme$subsystem", 00:13:44.720 "trtype": "$TEST_TRANSPORT", 00:13:44.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:44.720 "adrfam": "ipv4", 00:13:44.720 "trsvcid": "$NVMF_PORT", 00:13:44.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:44.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:44.720 "hdgst": ${hdgst:-false}, 00:13:44.720 "ddgst": ${ddgst:-false} 00:13:44.720 }, 00:13:44.720 "method": "bdev_nvme_attach_controller" 00:13:44.720 } 00:13:44.720 EOF 00:13:44.720 )") 00:13:44.720 12:09:45 -- nvmf/common.sh@543 -- # cat 00:13:44.720 12:09:45 -- nvmf/common.sh@545 -- # jq . 00:13:44.720 12:09:45 -- nvmf/common.sh@546 -- # IFS=, 00:13:44.720 12:09:45 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:44.720 "params": { 00:13:44.720 "name": "Nvme0", 00:13:44.720 "trtype": "tcp", 00:13:44.720 "traddr": "10.0.0.2", 00:13:44.720 "adrfam": "ipv4", 00:13:44.720 "trsvcid": "4420", 00:13:44.720 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:44.720 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:44.720 "hdgst": false, 00:13:44.720 "ddgst": false 00:13:44.720 }, 00:13:44.720 "method": "bdev_nvme_attach_controller" 00:13:44.720 }' 00:13:44.720 [2024-04-26 12:09:45.843289] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:13:44.720 [2024-04-26 12:09:45.843339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342463 ] 00:13:44.720 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.720 [2024-04-26 12:09:45.903015] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.980 [2024-04-26 12:09:45.966012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.980 Running I/O for 10 seconds... 
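The JSON printed above is the bdevperf subsystem config that gen_nvmf_target_json assembles for subsystem 0. The same controller attach can be expressed as a single rpc.py call against an SPDK application's RPC socket; the sketch below uses the bdevperf socket path and the 10.0.0.2:4420 listener from this run, with option names assumed from the standard rpc.py interface rather than taken from the log:

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      --name Nvme0 \
      --trtype tcp --traddr 10.0.0.2 --adrfam ipv4 --trsvcid 4420 \
      --subnqn nqn.2016-06.io.spdk:cnode0 \
      --hostnqn nqn.2016-06.io.spdk:host0
  # mirrors the "params" block in the generated JSON; hdgst/ddgst are left at their false defaults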
00:13:45.552 12:09:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:45.552 12:09:46 -- common/autotest_common.sh@850 -- # return 0 00:13:45.552 12:09:46 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:45.553 12:09:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.553 12:09:46 -- common/autotest_common.sh@10 -- # set +x 00:13:45.553 12:09:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.553 12:09:46 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:45.553 12:09:46 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:45.553 12:09:46 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:45.553 12:09:46 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:45.553 12:09:46 -- target/host_management.sh@52 -- # local ret=1 00:13:45.553 12:09:46 -- target/host_management.sh@53 -- # local i 00:13:45.553 12:09:46 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:45.553 12:09:46 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:45.553 12:09:46 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:45.553 12:09:46 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:45.553 12:09:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.553 12:09:46 -- common/autotest_common.sh@10 -- # set +x 00:13:45.553 12:09:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.553 12:09:46 -- target/host_management.sh@55 -- # read_io_count=707 00:13:45.553 12:09:46 -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:13:45.553 12:09:46 -- target/host_management.sh@59 -- # ret=0 00:13:45.553 12:09:46 -- target/host_management.sh@60 -- # break 00:13:45.553 12:09:46 -- target/host_management.sh@64 -- # return 0 00:13:45.553 12:09:46 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:45.553 12:09:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.553 12:09:46 -- common/autotest_common.sh@10 -- # set +x 00:13:45.553 [2024-04-26 12:09:46.684763] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684830] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684842] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684849] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684856] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684863] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684870] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684876] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the 
state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684882] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684889] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684900] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684906] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684913] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684920] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684926] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684938] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684951] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684958] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684964] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684970] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684977] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684983] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684989] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.684996] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685002] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685008] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685015] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685021] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685027] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685034] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685040] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685046] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685052] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685058] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685065] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685073] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685086] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685092] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685098] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685110] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685123] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685129] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685135] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685142] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685149] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685155] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 
12:09:46.685162] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685168] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685174] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685180] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685186] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685199] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685205] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685212] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685230] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba090 is same with the state(5) to be set 00:13:45.553 [2024-04-26 12:09:46.685833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.553 [2024-04-26 12:09:46.685878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.553 [2024-04-26 12:09:46.685898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.553 [2024-04-26 12:09:46.685906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.553 [2024-04-26 12:09:46.685915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.553 [2024-04-26 12:09:46.685923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.553 [2024-04-26 12:09:46.685932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.685939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.685949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 
[2024-04-26 12:09:46.685956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.685966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.685973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.685982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.685990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.685999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686128] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.554 [2024-04-26 12:09:46.686605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.554 [2024-04-26 12:09:46.686612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.555 [2024-04-26 12:09:46.686979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.555 [2024-04-26 12:09:46.686989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e30b0 is same with the state(5) to be set 00:13:45.555 [2024-04-26 12:09:46.687030] 
bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23e30b0 was disconnected and freed. reset controller. 00:13:45.555 [2024-04-26 12:09:46.688263] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:45.555 12:09:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.555 task offset: 98304 on job bdev=Nvme0n1 fails 00:13:45.555 00:13:45.555 Latency(us) 00:13:45.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.555 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:45.555 Job: Nvme0n1 ended in about 0.54 seconds with error 00:13:45.555 Verification LBA range: start 0x0 length 0x400 00:13:45.555 Nvme0n1 : 0.54 1432.34 89.52 119.36 0.00 40204.54 8574.29 33204.91 00:13:45.555 =================================================================================================================== 00:13:45.555 Total : 1432.34 89.52 119.36 0.00 40204.54 8574.29 33204.91 00:13:45.555 12:09:46 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:45.555 [2024-04-26 12:09:46.690263] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:45.555 [2024-04-26 12:09:46.690287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd2620 (9): Bad file descriptor 00:13:45.555 12:09:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.555 12:09:46 -- common/autotest_common.sh@10 -- # set +x 00:13:45.555 12:09:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.555 12:09:46 -- target/host_management.sh@87 -- # sleep 1 00:13:45.555 [2024-04-26 12:09:46.711553] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:46.499 12:09:47 -- target/host_management.sh@91 -- # kill -9 3342463 00:13:46.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3342463) - No such process 00:13:46.499 12:09:47 -- target/host_management.sh@91 -- # true 00:13:46.499 12:09:47 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:46.499 12:09:47 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:46.499 12:09:47 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:46.499 12:09:47 -- nvmf/common.sh@521 -- # config=() 00:13:46.499 12:09:47 -- nvmf/common.sh@521 -- # local subsystem config 00:13:46.500 12:09:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:46.500 12:09:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:46.500 { 00:13:46.500 "params": { 00:13:46.500 "name": "Nvme$subsystem", 00:13:46.500 "trtype": "$TEST_TRANSPORT", 00:13:46.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:46.500 "adrfam": "ipv4", 00:13:46.500 "trsvcid": "$NVMF_PORT", 00:13:46.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:46.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:46.500 "hdgst": ${hdgst:-false}, 00:13:46.500 "ddgst": ${ddgst:-false} 00:13:46.500 }, 00:13:46.500 "method": "bdev_nvme_attach_controller" 00:13:46.500 } 00:13:46.500 EOF 00:13:46.500 )") 00:13:46.500 12:09:47 -- nvmf/common.sh@543 -- # cat 00:13:46.760 12:09:47 -- nvmf/common.sh@545 -- # jq . 
00:13:46.760 12:09:47 -- nvmf/common.sh@546 -- # IFS=, 00:13:46.760 12:09:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:46.760 "params": { 00:13:46.760 "name": "Nvme0", 00:13:46.760 "trtype": "tcp", 00:13:46.760 "traddr": "10.0.0.2", 00:13:46.760 "adrfam": "ipv4", 00:13:46.760 "trsvcid": "4420", 00:13:46.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:46.760 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:46.760 "hdgst": false, 00:13:46.760 "ddgst": false 00:13:46.760 }, 00:13:46.760 "method": "bdev_nvme_attach_controller" 00:13:46.760 }' 00:13:46.760 [2024-04-26 12:09:47.758313] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:13:46.760 [2024-04-26 12:09:47.758369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342815 ] 00:13:46.760 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.761 [2024-04-26 12:09:47.818014] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.761 [2024-04-26 12:09:47.879223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.021 Running I/O for 1 seconds... 00:13:47.964 00:13:47.964 Latency(us) 00:13:47.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.964 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:47.964 Verification LBA range: start 0x0 length 0x400 00:13:47.964 Nvme0n1 : 1.00 1914.47 119.65 0.00 0.00 32807.42 5570.56 30583.47 00:13:47.964 =================================================================================================================== 00:13:47.964 Total : 1914.47 119.65 0.00 0.00 32807.42 5570.56 30583.47 00:13:48.225 12:09:49 -- target/host_management.sh@102 -- # stoptarget 00:13:48.225 12:09:49 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:48.225 12:09:49 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:48.225 12:09:49 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:48.225 12:09:49 -- target/host_management.sh@40 -- # nvmftestfini 00:13:48.225 12:09:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:48.225 12:09:49 -- nvmf/common.sh@117 -- # sync 00:13:48.225 12:09:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:48.225 12:09:49 -- nvmf/common.sh@120 -- # set +e 00:13:48.225 12:09:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:48.225 12:09:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:48.225 rmmod nvme_tcp 00:13:48.225 rmmod nvme_fabrics 00:13:48.225 rmmod nvme_keyring 00:13:48.225 12:09:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:48.225 12:09:49 -- nvmf/common.sh@124 -- # set -e 00:13:48.225 12:09:49 -- nvmf/common.sh@125 -- # return 0 00:13:48.225 12:09:49 -- nvmf/common.sh@478 -- # '[' -n 3342092 ']' 00:13:48.225 12:09:49 -- nvmf/common.sh@479 -- # killprocess 3342092 00:13:48.225 12:09:49 -- common/autotest_common.sh@936 -- # '[' -z 3342092 ']' 00:13:48.225 12:09:49 -- common/autotest_common.sh@940 -- # kill -0 3342092 00:13:48.225 12:09:49 -- common/autotest_common.sh@941 -- # uname 00:13:48.225 12:09:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:48.225 12:09:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3342092 00:13:48.225 12:09:49 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:48.225 12:09:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:48.225 12:09:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3342092' 00:13:48.225 killing process with pid 3342092 00:13:48.225 12:09:49 -- common/autotest_common.sh@955 -- # kill 3342092 00:13:48.225 12:09:49 -- common/autotest_common.sh@960 -- # wait 3342092 00:13:48.486 [2024-04-26 12:09:49.451785] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:48.486 12:09:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:48.486 12:09:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:48.486 12:09:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:48.486 12:09:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.486 12:09:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.486 12:09:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.486 12:09:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.486 12:09:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.400 12:09:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:50.400 00:13:50.400 real 0m6.734s 00:13:50.400 user 0m20.288s 00:13:50.400 sys 0m1.006s 00:13:50.400 12:09:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:50.400 12:09:51 -- common/autotest_common.sh@10 -- # set +x 00:13:50.400 ************************************ 00:13:50.400 END TEST nvmf_host_management 00:13:50.400 ************************************ 00:13:50.400 12:09:51 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:50.400 00:13:50.400 real 0m14.435s 00:13:50.400 user 0m22.381s 00:13:50.400 sys 0m6.526s 00:13:50.400 12:09:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:50.400 12:09:51 -- common/autotest_common.sh@10 -- # set +x 00:13:50.400 ************************************ 00:13:50.400 END TEST nvmf_host_management 00:13:50.400 ************************************ 00:13:50.661 12:09:51 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:50.661 12:09:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:50.661 12:09:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:50.661 12:09:51 -- common/autotest_common.sh@10 -- # set +x 00:13:50.661 ************************************ 00:13:50.661 START TEST nvmf_lvol 00:13:50.661 ************************************ 00:13:50.661 12:09:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:50.923 * Looking for test storage... 
00:13:50.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.923 12:09:51 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.923 12:09:51 -- nvmf/common.sh@7 -- # uname -s 00:13:50.923 12:09:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.923 12:09:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.923 12:09:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.923 12:09:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.923 12:09:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.923 12:09:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.923 12:09:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.923 12:09:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.923 12:09:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.923 12:09:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.923 12:09:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:50.923 12:09:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:50.923 12:09:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.923 12:09:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.923 12:09:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.923 12:09:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.923 12:09:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.923 12:09:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.923 12:09:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.923 12:09:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.923 12:09:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.923 12:09:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.923 12:09:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.923 12:09:51 -- paths/export.sh@5 -- # export PATH 00:13:50.923 12:09:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.923 12:09:51 -- nvmf/common.sh@47 -- # : 0 00:13:50.923 12:09:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.923 12:09:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.923 12:09:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.923 12:09:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.923 12:09:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.923 12:09:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.923 12:09:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.923 12:09:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.923 12:09:51 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:50.923 12:09:51 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:50.923 12:09:51 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:50.923 12:09:51 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:50.923 12:09:51 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.923 12:09:51 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:50.923 12:09:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:50.923 12:09:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.923 12:09:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:50.923 12:09:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:50.923 12:09:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:50.923 12:09:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.923 12:09:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.923 12:09:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.923 12:09:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:50.923 12:09:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:50.923 12:09:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:50.923 12:09:51 -- common/autotest_common.sh@10 -- # set +x 00:13:59.069 12:09:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:59.069 12:09:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:59.069 12:09:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:59.069 12:09:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:59.070 12:09:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:59.070 12:09:58 
-- nvmf/common.sh@293 -- # pci_drivers=() 00:13:59.070 12:09:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:59.070 12:09:58 -- nvmf/common.sh@295 -- # net_devs=() 00:13:59.070 12:09:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:59.070 12:09:58 -- nvmf/common.sh@296 -- # e810=() 00:13:59.070 12:09:58 -- nvmf/common.sh@296 -- # local -ga e810 00:13:59.070 12:09:58 -- nvmf/common.sh@297 -- # x722=() 00:13:59.070 12:09:58 -- nvmf/common.sh@297 -- # local -ga x722 00:13:59.070 12:09:58 -- nvmf/common.sh@298 -- # mlx=() 00:13:59.070 12:09:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:59.070 12:09:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.070 12:09:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.070 12:09:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.070 12:09:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.070 12:09:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.070 12:09:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.070 12:09:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.070 12:09:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.070 12:09:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.070 12:09:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.070 12:09:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.070 12:09:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:59.070 12:09:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:59.070 12:09:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:59.070 12:09:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.070 12:09:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:59.070 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:59.070 12:09:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.070 12:09:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:59.070 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:59.070 12:09:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:59.070 12:09:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.070 12:09:58 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.070 12:09:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:59.070 12:09:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.070 12:09:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:59.070 Found net devices under 0000:31:00.0: cvl_0_0 00:13:59.070 12:09:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.070 12:09:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.070 12:09:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.070 12:09:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:59.070 12:09:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.070 12:09:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:59.070 Found net devices under 0000:31:00.1: cvl_0_1 00:13:59.070 12:09:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.070 12:09:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:59.070 12:09:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:59.070 12:09:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:59.070 12:09:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:59.070 12:09:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.070 12:09:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.070 12:09:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:59.070 12:09:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:59.070 12:09:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:59.070 12:09:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:59.070 12:09:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:59.070 12:09:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:59.070 12:09:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.070 12:09:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:59.070 12:09:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:59.070 12:09:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:59.070 12:09:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:59.070 12:09:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:59.070 12:09:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:59.070 12:09:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:59.070 12:09:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.070 12:09:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.070 12:09:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.070 12:09:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:59.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:13:59.070 00:13:59.070 --- 10.0.0.2 ping statistics --- 00:13:59.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.070 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:13:59.070 12:09:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:59.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:13:59.070 00:13:59.070 --- 10.0.0.1 ping statistics --- 00:13:59.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.070 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:13:59.070 12:09:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.070 12:09:59 -- nvmf/common.sh@411 -- # return 0 00:13:59.070 12:09:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:59.070 12:09:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.070 12:09:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:59.070 12:09:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:59.070 12:09:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.070 12:09:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:59.070 12:09:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:59.070 12:09:59 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:59.070 12:09:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:59.070 12:09:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:59.070 12:09:59 -- common/autotest_common.sh@10 -- # set +x 00:13:59.070 12:09:59 -- nvmf/common.sh@470 -- # nvmfpid=3347354 00:13:59.070 12:09:59 -- nvmf/common.sh@471 -- # waitforlisten 3347354 00:13:59.070 12:09:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:59.070 12:09:59 -- common/autotest_common.sh@817 -- # '[' -z 3347354 ']' 00:13:59.070 12:09:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.070 12:09:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:59.070 12:09:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.070 12:09:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:59.070 12:09:59 -- common/autotest_common.sh@10 -- # set +x 00:13:59.070 [2024-04-26 12:09:59.225260] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:13:59.070 [2024-04-26 12:09:59.225325] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.070 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.070 [2024-04-26 12:09:59.299077] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:59.070 [2024-04-26 12:09:59.373223] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.070 [2024-04-26 12:09:59.373264] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.070 [2024-04-26 12:09:59.373272] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.070 [2024-04-26 12:09:59.373278] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.070 [2024-04-26 12:09:59.373284] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:59.070 [2024-04-26 12:09:59.373434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.070 [2024-04-26 12:09:59.373549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.070 [2024-04-26 12:09:59.373552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.070 12:10:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:59.070 12:10:00 -- common/autotest_common.sh@850 -- # return 0 00:13:59.070 12:10:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:59.070 12:10:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:59.070 12:10:00 -- common/autotest_common.sh@10 -- # set +x 00:13:59.070 12:10:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.070 12:10:00 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:59.070 [2024-04-26 12:10:00.186582] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.070 12:10:00 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:59.331 12:10:00 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:59.331 12:10:00 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:59.592 12:10:00 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:59.592 12:10:00 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:59.592 12:10:00 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:59.854 12:10:00 -- target/nvmf_lvol.sh@29 -- # lvs=361e4307-f547-4bbe-93be-7ee7e1295724 00:13:59.854 12:10:00 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 361e4307-f547-4bbe-93be-7ee7e1295724 lvol 20 00:14:00.115 12:10:01 -- target/nvmf_lvol.sh@32 -- # lvol=52602fc8-11a5-423c-a604-463e7777d335 00:14:00.115 12:10:01 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:00.115 12:10:01 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 52602fc8-11a5-423c-a604-463e7777d335 00:14:00.376 12:10:01 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:00.376 [2024-04-26 12:10:01.577546] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.636 12:10:01 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:00.636 12:10:01 -- target/nvmf_lvol.sh@42 -- # perf_pid=3347947 00:14:00.636 12:10:01 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:00.636 12:10:01 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:00.636 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.578 
12:10:02 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 52602fc8-11a5-423c-a604-463e7777d335 MY_SNAPSHOT 00:14:01.838 12:10:02 -- target/nvmf_lvol.sh@47 -- # snapshot=82c369f3-191a-4290-b22a-00bcb0e6a346 00:14:01.838 12:10:02 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 52602fc8-11a5-423c-a604-463e7777d335 30 00:14:02.098 12:10:03 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 82c369f3-191a-4290-b22a-00bcb0e6a346 MY_CLONE 00:14:02.359 12:10:03 -- target/nvmf_lvol.sh@49 -- # clone=4317f435-1101-4e51-90aa-aee96e7897ca 00:14:02.359 12:10:03 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4317f435-1101-4e51-90aa-aee96e7897ca 00:14:02.619 12:10:03 -- target/nvmf_lvol.sh@53 -- # wait 3347947 00:14:12.619 Initializing NVMe Controllers 00:14:12.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:12.619 Controller IO queue size 128, less than required. 00:14:12.619 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:12.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:12.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:12.619 Initialization complete. Launching workers. 00:14:12.619 ======================================================== 00:14:12.620 Latency(us) 00:14:12.620 Device Information : IOPS MiB/s Average min max 00:14:12.620 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11937.50 46.63 10724.64 1720.12 71354.17 00:14:12.620 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16827.30 65.73 7606.78 357.59 69043.39 00:14:12.620 ======================================================== 00:14:12.620 Total : 28764.80 112.36 8900.70 357.59 71354.17 00:14:12.620 00:14:12.620 12:10:12 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:12.620 12:10:12 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 52602fc8-11a5-423c-a604-463e7777d335 00:14:12.620 12:10:12 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 361e4307-f547-4bbe-93be-7ee7e1295724 00:14:12.620 12:10:12 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:12.620 12:10:12 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:12.620 12:10:12 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:12.620 12:10:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:12.620 12:10:12 -- nvmf/common.sh@117 -- # sync 00:14:12.620 12:10:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.620 12:10:12 -- nvmf/common.sh@120 -- # set +e 00:14:12.620 12:10:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.620 12:10:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.620 rmmod nvme_tcp 00:14:12.620 rmmod nvme_fabrics 00:14:12.620 rmmod nvme_keyring 00:14:12.620 12:10:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.620 12:10:12 -- nvmf/common.sh@124 -- # set -e 00:14:12.620 12:10:12 -- nvmf/common.sh@125 -- # return 0 00:14:12.620 12:10:12 -- nvmf/common.sh@478 -- # '[' -n 3347354 ']' 
00:14:12.620 12:10:12 -- nvmf/common.sh@479 -- # killprocess 3347354 00:14:12.620 12:10:12 -- common/autotest_common.sh@936 -- # '[' -z 3347354 ']' 00:14:12.620 12:10:12 -- common/autotest_common.sh@940 -- # kill -0 3347354 00:14:12.620 12:10:12 -- common/autotest_common.sh@941 -- # uname 00:14:12.620 12:10:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:12.620 12:10:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3347354 00:14:12.620 12:10:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:12.620 12:10:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:12.620 12:10:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3347354' 00:14:12.620 killing process with pid 3347354 00:14:12.620 12:10:12 -- common/autotest_common.sh@955 -- # kill 3347354 00:14:12.620 12:10:12 -- common/autotest_common.sh@960 -- # wait 3347354 00:14:12.620 12:10:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:12.620 12:10:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:12.620 12:10:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:12.620 12:10:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.620 12:10:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:12.620 12:10:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.620 12:10:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.620 12:10:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.005 12:10:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:14.005 00:14:14.005 real 0m23.114s 00:14:14.005 user 1m3.500s 00:14:14.005 sys 0m7.717s 00:14:14.005 12:10:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:14.005 12:10:14 -- common/autotest_common.sh@10 -- # set +x 00:14:14.005 ************************************ 00:14:14.005 END TEST nvmf_lvol 00:14:14.005 ************************************ 00:14:14.005 12:10:14 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:14.005 12:10:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:14.005 12:10:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:14.005 12:10:14 -- common/autotest_common.sh@10 -- # set +x 00:14:14.005 ************************************ 00:14:14.005 START TEST nvmf_lvs_grow 00:14:14.005 ************************************ 00:14:14.005 12:10:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:14.005 * Looking for test storage... 
00:14:14.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.005 12:10:15 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.005 12:10:15 -- nvmf/common.sh@7 -- # uname -s 00:14:14.005 12:10:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.005 12:10:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.005 12:10:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.005 12:10:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.005 12:10:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.005 12:10:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.005 12:10:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.005 12:10:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.005 12:10:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.005 12:10:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.266 12:10:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:14.266 12:10:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:14.266 12:10:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.266 12:10:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.266 12:10:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.266 12:10:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.266 12:10:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.266 12:10:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.266 12:10:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.266 12:10:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.266 12:10:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.266 12:10:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.266 12:10:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.267 12:10:15 -- paths/export.sh@5 -- # export PATH 00:14:14.267 12:10:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.267 12:10:15 -- nvmf/common.sh@47 -- # : 0 00:14:14.267 12:10:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:14.267 12:10:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:14.267 12:10:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.267 12:10:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.267 12:10:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.267 12:10:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:14.267 12:10:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:14.267 12:10:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:14.267 12:10:15 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:14.267 12:10:15 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:14.267 12:10:15 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:14.267 12:10:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:14.267 12:10:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.267 12:10:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:14.267 12:10:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:14.267 12:10:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:14.267 12:10:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.267 12:10:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.267 12:10:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.267 12:10:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:14.267 12:10:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:14.267 12:10:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:14.267 12:10:15 -- common/autotest_common.sh@10 -- # set +x 00:14:22.401 12:10:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:22.401 12:10:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:22.401 12:10:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:22.401 12:10:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:22.401 12:10:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:22.401 12:10:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:22.401 12:10:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:22.401 12:10:22 -- nvmf/common.sh@295 -- # net_devs=() 00:14:22.401 12:10:22 
-- nvmf/common.sh@295 -- # local -ga net_devs 00:14:22.401 12:10:22 -- nvmf/common.sh@296 -- # e810=() 00:14:22.401 12:10:22 -- nvmf/common.sh@296 -- # local -ga e810 00:14:22.401 12:10:22 -- nvmf/common.sh@297 -- # x722=() 00:14:22.401 12:10:22 -- nvmf/common.sh@297 -- # local -ga x722 00:14:22.401 12:10:22 -- nvmf/common.sh@298 -- # mlx=() 00:14:22.401 12:10:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:22.401 12:10:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:22.401 12:10:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:22.401 12:10:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:22.401 12:10:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:22.401 12:10:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:22.401 12:10:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:22.401 12:10:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:22.401 12:10:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:22.401 12:10:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:22.401 12:10:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:22.401 12:10:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:22.401 12:10:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:22.401 12:10:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:22.401 12:10:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:22.401 12:10:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:22.401 12:10:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:22.401 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:22.401 12:10:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:22.401 12:10:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:22.401 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:22.401 12:10:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:22.401 12:10:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:22.401 12:10:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.401 12:10:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:22.401 12:10:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.401 12:10:22 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:22.401 Found net devices under 0000:31:00.0: cvl_0_0 00:14:22.401 12:10:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.401 12:10:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:22.401 12:10:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.401 12:10:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:22.401 12:10:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.401 12:10:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:22.401 Found net devices under 0000:31:00.1: cvl_0_1 00:14:22.401 12:10:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.401 12:10:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:22.401 12:10:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:22.401 12:10:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:22.401 12:10:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:22.401 12:10:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:22.401 12:10:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:22.401 12:10:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:22.401 12:10:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:22.401 12:10:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:22.401 12:10:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:22.401 12:10:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:22.401 12:10:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.401 12:10:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:22.401 12:10:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:22.401 12:10:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:22.401 12:10:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:22.401 12:10:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:22.401 12:10:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:22.401 12:10:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:22.401 12:10:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:22.401 12:10:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:22.401 12:10:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:22.401 12:10:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:22.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:22.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:14:22.401 00:14:22.401 --- 10.0.0.2 ping statistics --- 00:14:22.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.401 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:14:22.401 12:10:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:22.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:22.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:14:22.401 00:14:22.401 --- 10.0.0.1 ping statistics --- 00:14:22.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.401 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:14:22.401 12:10:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:22.401 12:10:22 -- nvmf/common.sh@411 -- # return 0 00:14:22.401 12:10:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:22.401 12:10:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:22.401 12:10:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:22.401 12:10:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:22.401 12:10:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:22.401 12:10:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:22.401 12:10:22 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:22.401 12:10:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:22.401 12:10:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:22.401 12:10:22 -- common/autotest_common.sh@10 -- # set +x 00:14:22.401 12:10:22 -- nvmf/common.sh@470 -- # nvmfpid=3354360 00:14:22.401 12:10:22 -- nvmf/common.sh@471 -- # waitforlisten 3354360 00:14:22.401 12:10:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:22.401 12:10:22 -- common/autotest_common.sh@817 -- # '[' -z 3354360 ']' 00:14:22.401 12:10:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.401 12:10:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:22.401 12:10:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.401 12:10:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:22.401 12:10:22 -- common/autotest_common.sh@10 -- # set +x 00:14:22.401 [2024-04-26 12:10:22.578429] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:22.401 [2024-04-26 12:10:22.578478] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.401 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.401 [2024-04-26 12:10:22.642541] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.401 [2024-04-26 12:10:22.704910] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.401 [2024-04-26 12:10:22.704946] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.401 [2024-04-26 12:10:22.704954] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.401 [2024-04-26 12:10:22.704960] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.401 [2024-04-26 12:10:22.704966] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:22.401 [2024-04-26 12:10:22.704984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.401 12:10:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:22.401 12:10:23 -- common/autotest_common.sh@850 -- # return 0 00:14:22.401 12:10:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:22.401 12:10:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:22.401 12:10:23 -- common/autotest_common.sh@10 -- # set +x 00:14:22.401 12:10:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.401 12:10:23 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:22.401 [2024-04-26 12:10:23.543800] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.401 12:10:23 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:22.401 12:10:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:22.401 12:10:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:22.401 12:10:23 -- common/autotest_common.sh@10 -- # set +x 00:14:22.661 ************************************ 00:14:22.661 START TEST lvs_grow_clean 00:14:22.661 ************************************ 00:14:22.661 12:10:23 -- common/autotest_common.sh@1111 -- # lvs_grow 00:14:22.661 12:10:23 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:22.661 12:10:23 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:22.661 12:10:23 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:22.661 12:10:23 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:22.661 12:10:23 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:22.661 12:10:23 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:22.661 12:10:23 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:22.661 12:10:23 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:22.661 12:10:23 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:22.661 12:10:23 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:22.661 12:10:23 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:22.921 12:10:24 -- target/nvmf_lvs_grow.sh@28 -- # lvs=7842a078-b253-4ed2-a15d-e54ae91c1ee3 00:14:22.921 12:10:24 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7842a078-b253-4ed2-a15d-e54ae91c1ee3 00:14:22.921 12:10:24 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:23.182 12:10:24 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:23.182 12:10:24 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:23.182 12:10:24 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7842a078-b253-4ed2-a15d-e54ae91c1ee3 lvol 150 00:14:23.182 12:10:24 -- target/nvmf_lvs_grow.sh@33 -- # lvol=4cdef163-617c-42e4-a952-be643075da11 00:14:23.182 12:10:24 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:23.182 12:10:24 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:23.443 [2024-04-26 12:10:24.459223] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:23.443 [2024-04-26 12:10:24.459272] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:23.443 true 00:14:23.443 12:10:24 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:23.443 12:10:24 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7842a078-b253-4ed2-a15d-e54ae91c1ee3 00:14:23.443 12:10:24 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:23.443 12:10:24 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:23.703 12:10:24 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4cdef163-617c-42e4-a952-be643075da11 00:14:23.703 12:10:24 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:23.963 [2024-04-26 12:10:25.041022] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.963 12:10:25 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:24.257 12:10:25 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3355031 00:14:24.257 12:10:25 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:24.257 12:10:25 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:24.257 12:10:25 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3355031 /var/tmp/bdevperf.sock 00:14:24.257 12:10:25 -- common/autotest_common.sh@817 -- # '[' -z 3355031 ']' 00:14:24.257 12:10:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:24.257 12:10:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:24.257 12:10:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:24.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:24.257 12:10:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:24.257 12:10:25 -- common/autotest_common.sh@10 -- # set +x 00:14:24.257 [2024-04-26 12:10:25.253630] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:14:24.257 [2024-04-26 12:10:25.253679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3355031 ] 00:14:24.257 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.257 [2024-04-26 12:10:25.329042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.257 [2024-04-26 12:10:25.391292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.849 12:10:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:24.849 12:10:26 -- common/autotest_common.sh@850 -- # return 0 00:14:24.849 12:10:26 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:25.109 Nvme0n1 00:14:25.109 12:10:26 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:25.369 [ 00:14:25.369 { 00:14:25.369 "name": "Nvme0n1", 00:14:25.369 "aliases": [ 00:14:25.369 "4cdef163-617c-42e4-a952-be643075da11" 00:14:25.369 ], 00:14:25.369 "product_name": "NVMe disk", 00:14:25.369 "block_size": 4096, 00:14:25.369 "num_blocks": 38912, 00:14:25.369 "uuid": "4cdef163-617c-42e4-a952-be643075da11", 00:14:25.369 "assigned_rate_limits": { 00:14:25.369 "rw_ios_per_sec": 0, 00:14:25.369 "rw_mbytes_per_sec": 0, 00:14:25.369 "r_mbytes_per_sec": 0, 00:14:25.369 "w_mbytes_per_sec": 0 00:14:25.369 }, 00:14:25.369 "claimed": false, 00:14:25.369 "zoned": false, 00:14:25.369 "supported_io_types": { 00:14:25.369 "read": true, 00:14:25.369 "write": true, 00:14:25.369 "unmap": true, 00:14:25.369 "write_zeroes": true, 00:14:25.369 "flush": true, 00:14:25.369 "reset": true, 00:14:25.369 "compare": true, 00:14:25.369 "compare_and_write": true, 00:14:25.369 "abort": true, 00:14:25.369 "nvme_admin": true, 00:14:25.369 "nvme_io": true 00:14:25.369 }, 00:14:25.369 "memory_domains": [ 00:14:25.369 { 00:14:25.369 "dma_device_id": "system", 00:14:25.369 "dma_device_type": 1 00:14:25.369 } 00:14:25.369 ], 00:14:25.369 "driver_specific": { 00:14:25.369 "nvme": [ 00:14:25.369 { 00:14:25.369 "trid": { 00:14:25.369 "trtype": "TCP", 00:14:25.369 "adrfam": "IPv4", 00:14:25.369 "traddr": "10.0.0.2", 00:14:25.369 "trsvcid": "4420", 00:14:25.369 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:25.369 }, 00:14:25.369 "ctrlr_data": { 00:14:25.369 "cntlid": 1, 00:14:25.369 "vendor_id": "0x8086", 00:14:25.369 "model_number": "SPDK bdev Controller", 00:14:25.369 "serial_number": "SPDK0", 00:14:25.369 "firmware_revision": "24.05", 00:14:25.369 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:25.369 "oacs": { 00:14:25.369 "security": 0, 00:14:25.369 "format": 0, 00:14:25.369 "firmware": 0, 00:14:25.369 "ns_manage": 0 00:14:25.369 }, 00:14:25.369 "multi_ctrlr": true, 00:14:25.369 "ana_reporting": false 00:14:25.369 }, 00:14:25.369 "vs": { 00:14:25.369 "nvme_version": "1.3" 00:14:25.369 }, 00:14:25.369 "ns_data": { 00:14:25.369 "id": 1, 00:14:25.369 "can_share": true 00:14:25.369 } 00:14:25.369 } 00:14:25.369 ], 00:14:25.369 "mp_policy": "active_passive" 00:14:25.369 } 00:14:25.369 } 00:14:25.369 ] 00:14:25.369 12:10:26 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3355152 00:14:25.369 12:10:26 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:25.369 12:10:26 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:25.370 Running I/O for 10 seconds... 00:14:26.752 Latency(us) 00:14:26.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.752 Nvme0n1 : 1.00 17591.00 68.71 0.00 0.00 0.00 0.00 0.00 00:14:26.752 =================================================================================================================== 00:14:26.752 Total : 17591.00 68.71 0.00 0.00 0.00 0.00 0.00 00:14:26.752 00:14:27.323 12:10:28 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7842a078-b253-4ed2-a15d-e54ae91c1ee3 00:14:27.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.583 Nvme0n1 : 2.00 17657.50 68.97 0.00 0.00 0.00 0.00 0.00 00:14:27.583 =================================================================================================================== 00:14:27.583 Total : 17657.50 68.97 0.00 0.00 0.00 0.00 0.00 00:14:27.583 00:14:27.583 true 00:14:27.583 12:10:28 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7842a078-b253-4ed2-a15d-e54ae91c1ee3 00:14:27.583 12:10:28 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:27.843 12:10:28 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:27.843 12:10:28 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:27.843 12:10:28 -- target/nvmf_lvs_grow.sh@65 -- # wait 3355152 00:14:28.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.412 Nvme0n1 : 3.00 17699.00 69.14 0.00 0.00 0.00 0.00 0.00 00:14:28.412 =================================================================================================================== 00:14:28.412 Total : 17699.00 69.14 0.00 0.00 0.00 0.00 0.00 00:14:28.412 00:14:29.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.352 Nvme0n1 : 4.00 17717.25 69.21 0.00 0.00 0.00 0.00 0.00 00:14:29.352 =================================================================================================================== 00:14:29.352 Total : 17717.25 69.21 0.00 0.00 0.00 0.00 0.00 00:14:29.352 00:14:30.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.739 Nvme0n1 : 5.00 17739.20 69.29 0.00 0.00 0.00 0.00 0.00 00:14:30.739 =================================================================================================================== 00:14:30.739 Total : 17739.20 69.29 0.00 0.00 0.00 0.00 0.00 00:14:30.739 00:14:31.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.681 Nvme0n1 : 6.00 17755.67 69.36 0.00 0.00 0.00 0.00 0.00 00:14:31.681 =================================================================================================================== 00:14:31.681 Total : 17755.67 69.36 0.00 0.00 0.00 0.00 0.00 00:14:31.681 00:14:32.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.624 Nvme0n1 : 7.00 17760.00 69.38 0.00 0.00 0.00 0.00 0.00 00:14:32.624 =================================================================================================================== 00:14:32.624 Total : 17760.00 69.38 0.00 0.00 0.00 0.00 0.00 00:14:32.624 00:14:33.567 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:14:33.567 Nvme0n1 : 8.00 17769.75 69.41 0.00 0.00 0.00 0.00 0.00 00:14:33.567 =================================================================================================================== 00:14:33.567 Total : 17769.75 69.41 0.00 0.00 0.00 0.00 0.00 00:14:33.567 00:14:34.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.509 Nvme0n1 : 9.00 17784.56 69.47 0.00 0.00 0.00 0.00 0.00 00:14:34.509 =================================================================================================================== 00:14:34.509 Total : 17784.56 69.47 0.00 0.00 0.00 0.00 0.00 00:14:34.509 00:14:35.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.453 Nvme0n1 : 10.00 17789.70 69.49 0.00 0.00 0.00 0.00 0.00 00:14:35.453 =================================================================================================================== 00:14:35.453 Total : 17789.70 69.49 0.00 0.00 0.00 0.00 0.00 00:14:35.453 00:14:35.453 00:14:35.453 Latency(us) 00:14:35.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.453 Nvme0n1 : 10.00 17795.45 69.51 0.00 0.00 7189.70 3604.48 12888.75 00:14:35.453 =================================================================================================================== 00:14:35.453 Total : 17795.45 69.51 0.00 0.00 7189.70 3604.48 12888.75 00:14:35.453 0 00:14:35.453 12:10:36 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3355031 00:14:35.453 12:10:36 -- common/autotest_common.sh@936 -- # '[' -z 3355031 ']' 00:14:35.453 12:10:36 -- common/autotest_common.sh@940 -- # kill -0 3355031 00:14:35.453 12:10:36 -- common/autotest_common.sh@941 -- # uname 00:14:35.453 12:10:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:35.453 12:10:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3355031 00:14:35.453 12:10:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:35.453 12:10:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:35.453 12:10:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3355031' 00:14:35.453 killing process with pid 3355031 00:14:35.453 12:10:36 -- common/autotest_common.sh@955 -- # kill 3355031 00:14:35.453 Received shutdown signal, test time was about 10.000000 seconds 00:14:35.453 00:14:35.453 Latency(us) 00:14:35.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.453 =================================================================================================================== 00:14:35.453 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:35.453 12:10:36 -- common/autotest_common.sh@960 -- # wait 3355031 00:14:35.714 12:10:36 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:35.975 12:10:36 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7842a078-b253-4ed2-a15d-e54ae91c1ee3 00:14:35.975 12:10:36 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:35.975 12:10:37 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:35.975 12:10:37 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:35.975 12:10:37 -- target/nvmf_lvs_grow.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:36.235 [2024-04-26 12:10:37.214872] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:36.235 12:10:37 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7842a078-b253-4ed2-a15d-e54ae91c1ee3 00:14:36.235 12:10:37 -- common/autotest_common.sh@638 -- # local es=0 00:14:36.235 12:10:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7842a078-b253-4ed2-a15d-e54ae91c1ee3 00:14:36.235 12:10:37 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.235 12:10:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:36.235 12:10:37 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.235 12:10:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:36.235 12:10:37 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.235 12:10:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:36.235 12:10:37 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.235 12:10:37 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:36.235 12:10:37 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7842a078-b253-4ed2-a15d-e54ae91c1ee3 00:14:36.235 request: 00:14:36.235 { 00:14:36.235 "uuid": "7842a078-b253-4ed2-a15d-e54ae91c1ee3", 00:14:36.235 "method": "bdev_lvol_get_lvstores", 00:14:36.235 "req_id": 1 00:14:36.235 } 00:14:36.235 Got JSON-RPC error response 00:14:36.235 response: 00:14:36.235 { 00:14:36.235 "code": -19, 00:14:36.235 "message": "No such device" 00:14:36.235 } 00:14:36.235 12:10:37 -- common/autotest_common.sh@641 -- # es=1 00:14:36.235 12:10:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:36.235 12:10:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:36.235 12:10:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:36.235 12:10:37 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:36.496 aio_bdev 00:14:36.496 12:10:37 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 4cdef163-617c-42e4-a952-be643075da11 00:14:36.496 12:10:37 -- common/autotest_common.sh@885 -- # local bdev_name=4cdef163-617c-42e4-a952-be643075da11 00:14:36.496 12:10:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:36.496 12:10:37 -- common/autotest_common.sh@887 -- # local i 00:14:36.496 12:10:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:36.496 12:10:37 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:36.496 12:10:37 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:36.757 12:10:37 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4cdef163-617c-42e4-a952-be643075da11 -t 2000 
00:14:36.757 [ 00:14:36.757 { 00:14:36.757 "name": "4cdef163-617c-42e4-a952-be643075da11", 00:14:36.757 "aliases": [ 00:14:36.757 "lvs/lvol" 00:14:36.757 ], 00:14:36.757 "product_name": "Logical Volume", 00:14:36.757 "block_size": 4096, 00:14:36.757 "num_blocks": 38912, 00:14:36.757 "uuid": "4cdef163-617c-42e4-a952-be643075da11", 00:14:36.757 "assigned_rate_limits": { 00:14:36.757 "rw_ios_per_sec": 0, 00:14:36.757 "rw_mbytes_per_sec": 0, 00:14:36.757 "r_mbytes_per_sec": 0, 00:14:36.757 "w_mbytes_per_sec": 0 00:14:36.757 }, 00:14:36.757 "claimed": false, 00:14:36.757 "zoned": false, 00:14:36.757 "supported_io_types": { 00:14:36.757 "read": true, 00:14:36.757 "write": true, 00:14:36.757 "unmap": true, 00:14:36.757 "write_zeroes": true, 00:14:36.757 "flush": false, 00:14:36.757 "reset": true, 00:14:36.757 "compare": false, 00:14:36.757 "compare_and_write": false, 00:14:36.757 "abort": false, 00:14:36.757 "nvme_admin": false, 00:14:36.757 "nvme_io": false 00:14:36.757 }, 00:14:36.757 "driver_specific": { 00:14:36.757 "lvol": { 00:14:36.757 "lvol_store_uuid": "7842a078-b253-4ed2-a15d-e54ae91c1ee3", 00:14:36.757 "base_bdev": "aio_bdev", 00:14:36.757 "thin_provision": false, 00:14:36.757 "snapshot": false, 00:14:36.757 "clone": false, 00:14:36.757 "esnap_clone": false 00:14:36.757 } 00:14:36.757 } 00:14:36.757 } 00:14:36.757 ] 00:14:36.757 12:10:37 -- common/autotest_common.sh@893 -- # return 0 00:14:36.757 12:10:37 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7842a078-b253-4ed2-a15d-e54ae91c1ee3 00:14:36.757 12:10:37 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:37.018 12:10:38 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:37.018 12:10:38 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7842a078-b253-4ed2-a15d-e54ae91c1ee3 00:14:37.018 12:10:38 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:37.018 12:10:38 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:37.018 12:10:38 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4cdef163-617c-42e4-a952-be643075da11 00:14:37.279 12:10:38 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7842a078-b253-4ed2-a15d-e54ae91c1ee3 00:14:37.539 12:10:38 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:37.540 12:10:38 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:37.540 00:14:37.540 real 0m15.029s 00:14:37.540 user 0m14.848s 00:14:37.540 sys 0m1.147s 00:14:37.540 12:10:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:37.540 12:10:38 -- common/autotest_common.sh@10 -- # set +x 00:14:37.540 ************************************ 00:14:37.540 END TEST lvs_grow_clean 00:14:37.540 ************************************ 00:14:37.801 12:10:38 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:37.801 12:10:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:37.801 12:10:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:37.801 12:10:38 -- common/autotest_common.sh@10 -- # set +x 00:14:37.801 ************************************ 00:14:37.801 START TEST lvs_grow_dirty 
00:14:37.801 ************************************ 00:14:37.801 12:10:38 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:14:37.801 12:10:38 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:37.801 12:10:38 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:37.801 12:10:38 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:37.801 12:10:38 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:37.801 12:10:38 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:37.801 12:10:38 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:37.801 12:10:38 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:37.801 12:10:38 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:37.801 12:10:38 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:38.062 12:10:39 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:38.062 12:10:39 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:38.062 12:10:39 -- target/nvmf_lvs_grow.sh@28 -- # lvs=36db5d9a-b44e-45b4-829f-2629cb301e7b 00:14:38.062 12:10:39 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36db5d9a-b44e-45b4-829f-2629cb301e7b 00:14:38.062 12:10:39 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:38.322 12:10:39 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:38.322 12:10:39 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:38.322 12:10:39 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 36db5d9a-b44e-45b4-829f-2629cb301e7b lvol 150 00:14:38.583 12:10:39 -- target/nvmf_lvs_grow.sh@33 -- # lvol=fea1930f-7725-49eb-aa98-b6e08eeba16e 00:14:38.583 12:10:39 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:38.583 12:10:39 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:38.583 [2024-04-26 12:10:39.698214] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:38.583 [2024-04-26 12:10:39.698266] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:38.583 true 00:14:38.583 12:10:39 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36db5d9a-b44e-45b4-829f-2629cb301e7b 00:14:38.583 12:10:39 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:38.844 12:10:39 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:38.844 12:10:39 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:38.844 12:10:40 -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fea1930f-7725-49eb-aa98-b6e08eeba16e 00:14:39.105 12:10:40 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:39.366 12:10:40 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:39.366 12:10:40 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3358040 00:14:39.366 12:10:40 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:39.366 12:10:40 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:39.366 12:10:40 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3358040 /var/tmp/bdevperf.sock 00:14:39.366 12:10:40 -- common/autotest_common.sh@817 -- # '[' -z 3358040 ']' 00:14:39.366 12:10:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:39.366 12:10:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:39.366 12:10:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:39.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:39.366 12:10:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:39.366 12:10:40 -- common/autotest_common.sh@10 -- # set +x 00:14:39.366 [2024-04-26 12:10:40.530033] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
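The bdevperf instance launched above with -z idles until it is driven over its RPC socket; the following trace does exactly that. A condensed, illustrative sketch of the driving pattern, reusing the socket path, target address and NQN from this run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

# attach the exported namespace as bdev "Nvme0" inside the waiting bdevperf process
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

# then kick off the I/O phase that was preconfigured on the command line (-q 128 -o 4096 -w randwrite -t 10)
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests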
00:14:39.366 [2024-04-26 12:10:40.530085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3358040 ] 00:14:39.366 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.627 [2024-04-26 12:10:40.603253] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.627 [2024-04-26 12:10:40.655916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.198 12:10:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:40.198 12:10:41 -- common/autotest_common.sh@850 -- # return 0 00:14:40.198 12:10:41 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:40.460 Nvme0n1 00:14:40.460 12:10:41 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:40.722 [ 00:14:40.722 { 00:14:40.722 "name": "Nvme0n1", 00:14:40.722 "aliases": [ 00:14:40.722 "fea1930f-7725-49eb-aa98-b6e08eeba16e" 00:14:40.722 ], 00:14:40.722 "product_name": "NVMe disk", 00:14:40.722 "block_size": 4096, 00:14:40.722 "num_blocks": 38912, 00:14:40.722 "uuid": "fea1930f-7725-49eb-aa98-b6e08eeba16e", 00:14:40.722 "assigned_rate_limits": { 00:14:40.722 "rw_ios_per_sec": 0, 00:14:40.722 "rw_mbytes_per_sec": 0, 00:14:40.722 "r_mbytes_per_sec": 0, 00:14:40.722 "w_mbytes_per_sec": 0 00:14:40.722 }, 00:14:40.722 "claimed": false, 00:14:40.722 "zoned": false, 00:14:40.722 "supported_io_types": { 00:14:40.722 "read": true, 00:14:40.722 "write": true, 00:14:40.722 "unmap": true, 00:14:40.722 "write_zeroes": true, 00:14:40.722 "flush": true, 00:14:40.722 "reset": true, 00:14:40.722 "compare": true, 00:14:40.722 "compare_and_write": true, 00:14:40.722 "abort": true, 00:14:40.722 "nvme_admin": true, 00:14:40.722 "nvme_io": true 00:14:40.722 }, 00:14:40.722 "memory_domains": [ 00:14:40.722 { 00:14:40.722 "dma_device_id": "system", 00:14:40.722 "dma_device_type": 1 00:14:40.722 } 00:14:40.722 ], 00:14:40.722 "driver_specific": { 00:14:40.722 "nvme": [ 00:14:40.722 { 00:14:40.722 "trid": { 00:14:40.722 "trtype": "TCP", 00:14:40.722 "adrfam": "IPv4", 00:14:40.722 "traddr": "10.0.0.2", 00:14:40.722 "trsvcid": "4420", 00:14:40.722 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:40.722 }, 00:14:40.722 "ctrlr_data": { 00:14:40.722 "cntlid": 1, 00:14:40.722 "vendor_id": "0x8086", 00:14:40.722 "model_number": "SPDK bdev Controller", 00:14:40.722 "serial_number": "SPDK0", 00:14:40.722 "firmware_revision": "24.05", 00:14:40.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:40.722 "oacs": { 00:14:40.722 "security": 0, 00:14:40.722 "format": 0, 00:14:40.722 "firmware": 0, 00:14:40.722 "ns_manage": 0 00:14:40.722 }, 00:14:40.722 "multi_ctrlr": true, 00:14:40.722 "ana_reporting": false 00:14:40.722 }, 00:14:40.722 "vs": { 00:14:40.722 "nvme_version": "1.3" 00:14:40.722 }, 00:14:40.722 "ns_data": { 00:14:40.722 "id": 1, 00:14:40.722 "can_share": true 00:14:40.722 } 00:14:40.722 } 00:14:40.722 ], 00:14:40.722 "mp_policy": "active_passive" 00:14:40.722 } 00:14:40.722 } 00:14:40.722 ] 00:14:40.722 12:10:41 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3358182 00:14:40.722 12:10:41 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:40.722 12:10:41 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:40.722 Running I/O for 10 seconds... 00:14:41.667 Latency(us) 00:14:41.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.667 Nvme0n1 : 1.00 17538.00 68.51 0.00 0.00 0.00 0.00 0.00 00:14:41.667 =================================================================================================================== 00:14:41.667 Total : 17538.00 68.51 0.00 0.00 0.00 0.00 0.00 00:14:41.667 00:14:42.620 12:10:43 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 36db5d9a-b44e-45b4-829f-2629cb301e7b 00:14:42.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.620 Nvme0n1 : 2.00 17628.00 68.86 0.00 0.00 0.00 0.00 0.00 00:14:42.620 =================================================================================================================== 00:14:42.620 Total : 17628.00 68.86 0.00 0.00 0.00 0.00 0.00 00:14:42.620 00:14:42.881 true 00:14:42.881 12:10:43 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36db5d9a-b44e-45b4-829f-2629cb301e7b 00:14:42.881 12:10:43 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:42.881 12:10:44 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:42.881 12:10:44 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:42.881 12:10:44 -- target/nvmf_lvs_grow.sh@65 -- # wait 3358182 00:14:43.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.823 Nvme0n1 : 3.00 17677.00 69.05 0.00 0.00 0.00 0.00 0.00 00:14:43.823 =================================================================================================================== 00:14:43.823 Total : 17677.00 69.05 0.00 0.00 0.00 0.00 0.00 00:14:43.823 00:14:44.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.765 Nvme0n1 : 4.00 17702.00 69.15 0.00 0.00 0.00 0.00 0.00 00:14:44.765 =================================================================================================================== 00:14:44.765 Total : 17702.00 69.15 0.00 0.00 0.00 0.00 0.00 00:14:44.765 00:14:45.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.709 Nvme0n1 : 5.00 17720.00 69.22 0.00 0.00 0.00 0.00 0.00 00:14:45.709 =================================================================================================================== 00:14:45.709 Total : 17720.00 69.22 0.00 0.00 0.00 0.00 0.00 00:14:45.709 00:14:46.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.650 Nvme0n1 : 6.00 17748.83 69.33 0.00 0.00 0.00 0.00 0.00 00:14:46.650 =================================================================================================================== 00:14:46.650 Total : 17748.83 69.33 0.00 0.00 0.00 0.00 0.00 00:14:46.650 00:14:48.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.034 Nvme0n1 : 7.00 17753.29 69.35 0.00 0.00 0.00 0.00 0.00 00:14:48.034 =================================================================================================================== 00:14:48.034 Total : 17753.29 69.35 0.00 0.00 0.00 0.00 0.00 00:14:48.034 00:14:48.972 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:14:48.972 Nvme0n1 : 8.00 17771.88 69.42 0.00 0.00 0.00 0.00 0.00 00:14:48.972 =================================================================================================================== 00:14:48.973 Total : 17771.88 69.42 0.00 0.00 0.00 0.00 0.00 00:14:48.973 00:14:49.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.912 Nvme0n1 : 9.00 17785.33 69.47 0.00 0.00 0.00 0.00 0.00 00:14:49.912 =================================================================================================================== 00:14:49.912 Total : 17785.33 69.47 0.00 0.00 0.00 0.00 0.00 00:14:49.912 00:14:50.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.853 Nvme0n1 : 10.00 17791.20 69.50 0.00 0.00 0.00 0.00 0.00 00:14:50.853 =================================================================================================================== 00:14:50.853 Total : 17791.20 69.50 0.00 0.00 0.00 0.00 0.00 00:14:50.853 00:14:50.853 00:14:50.853 Latency(us) 00:14:50.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.853 Nvme0n1 : 10.01 17794.09 69.51 0.00 0.00 7190.13 2457.60 13762.56 00:14:50.853 =================================================================================================================== 00:14:50.853 Total : 17794.09 69.51 0.00 0.00 7190.13 2457.60 13762.56 00:14:50.853 0 00:14:50.853 12:10:51 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3358040 00:14:50.853 12:10:51 -- common/autotest_common.sh@936 -- # '[' -z 3358040 ']' 00:14:50.853 12:10:51 -- common/autotest_common.sh@940 -- # kill -0 3358040 00:14:50.853 12:10:51 -- common/autotest_common.sh@941 -- # uname 00:14:50.853 12:10:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:50.853 12:10:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3358040 00:14:50.853 12:10:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:50.853 12:10:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:50.853 12:10:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3358040' 00:14:50.853 killing process with pid 3358040 00:14:50.853 12:10:51 -- common/autotest_common.sh@955 -- # kill 3358040 00:14:50.853 Received shutdown signal, test time was about 10.000000 seconds 00:14:50.853 00:14:50.853 Latency(us) 00:14:50.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.853 =================================================================================================================== 00:14:50.853 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:50.853 12:10:51 -- common/autotest_common.sh@960 -- # wait 3358040 00:14:50.853 12:10:52 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:51.158 12:10:52 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36db5d9a-b44e-45b4-829f-2629cb301e7b 00:14:51.158 12:10:52 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:51.445 12:10:52 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:51.445 12:10:52 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:51.445 12:10:52 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3354360 00:14:51.445 
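The cluster counts asserted throughout this test fall straight out of the sizes in play; a quick bash arithmetic check using only numbers visible in the trace:

CLUSTER=4194304                        # --cluster-sz 4194304 (4 MiB)
echo $(( 200*1024*1024 / CLUSTER ))    # 50  -> 49 data clusters (remainder is lvstore metadata)
echo $(( 400*1024*1024 / CLUSTER ))    # 100 -> 99 data clusters after bdev_lvol_grow_lvstore
echo $(( 38912*4096 / CLUSTER ))       # 38  -> the 150M lvol; 99 - 38 = 61 free clusters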
12:10:52 -- target/nvmf_lvs_grow.sh@74 -- # wait 3354360 00:14:51.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3354360 Killed "${NVMF_APP[@]}" "$@" 00:14:51.445 12:10:52 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:51.445 12:10:52 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:51.445 12:10:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:51.445 12:10:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:51.445 12:10:52 -- common/autotest_common.sh@10 -- # set +x 00:14:51.445 12:10:52 -- nvmf/common.sh@470 -- # nvmfpid=3360392 00:14:51.445 12:10:52 -- nvmf/common.sh@471 -- # waitforlisten 3360392 00:14:51.445 12:10:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:51.445 12:10:52 -- common/autotest_common.sh@817 -- # '[' -z 3360392 ']' 00:14:51.445 12:10:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.445 12:10:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:51.445 12:10:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.445 12:10:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:51.445 12:10:52 -- common/autotest_common.sh@10 -- # set +x 00:14:51.445 [2024-04-26 12:10:52.504456] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:51.445 [2024-04-26 12:10:52.504524] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.445 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.445 [2024-04-26 12:10:52.571899] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.445 [2024-04-26 12:10:52.636580] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.445 [2024-04-26 12:10:52.636618] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.445 [2024-04-26 12:10:52.636625] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.445 [2024-04-26 12:10:52.636632] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.445 [2024-04-26 12:10:52.636637] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
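What the dirty variant exercises here: the previous target was killed with SIGKILL while lvstore metadata was still unflushed, so the freshly started target must recover the blobstore when the aio bdev is registered again. A condensed sketch of that sequence, with the old pid as a placeholder and the paths and UUID as used in this run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
AIO=$SPDK/test/nvmf/target/aio_bdev
LVS=36db5d9a-b44e-45b4-829f-2629cb301e7b

kill -9 "$OLD_NVMF_PID"     # placeholder for the first target's pid (3354360 above)
# ... start a fresh nvmf_tgt and wait for its RPC socket ...
$SPDK/scripts/rpc.py bdev_aio_create "$AIO" aio_bdev 4096    # registering the bdev triggers blobstore recovery
$SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].free_clusters'    # expect 61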
00:14:51.445 [2024-04-26 12:10:52.636654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.395 12:10:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:52.395 12:10:53 -- common/autotest_common.sh@850 -- # return 0 00:14:52.395 12:10:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:52.395 12:10:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:52.395 12:10:53 -- common/autotest_common.sh@10 -- # set +x 00:14:52.395 12:10:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.395 12:10:53 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:52.395 [2024-04-26 12:10:53.437715] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:52.395 [2024-04-26 12:10:53.437805] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:52.395 [2024-04-26 12:10:53.437834] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:52.395 12:10:53 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:52.395 12:10:53 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev fea1930f-7725-49eb-aa98-b6e08eeba16e 00:14:52.395 12:10:53 -- common/autotest_common.sh@885 -- # local bdev_name=fea1930f-7725-49eb-aa98-b6e08eeba16e 00:14:52.395 12:10:53 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:52.395 12:10:53 -- common/autotest_common.sh@887 -- # local i 00:14:52.395 12:10:53 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:52.395 12:10:53 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:52.395 12:10:53 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:52.657 12:10:53 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fea1930f-7725-49eb-aa98-b6e08eeba16e -t 2000 00:14:52.657 [ 00:14:52.657 { 00:14:52.657 "name": "fea1930f-7725-49eb-aa98-b6e08eeba16e", 00:14:52.657 "aliases": [ 00:14:52.657 "lvs/lvol" 00:14:52.657 ], 00:14:52.657 "product_name": "Logical Volume", 00:14:52.657 "block_size": 4096, 00:14:52.657 "num_blocks": 38912, 00:14:52.657 "uuid": "fea1930f-7725-49eb-aa98-b6e08eeba16e", 00:14:52.657 "assigned_rate_limits": { 00:14:52.657 "rw_ios_per_sec": 0, 00:14:52.657 "rw_mbytes_per_sec": 0, 00:14:52.657 "r_mbytes_per_sec": 0, 00:14:52.657 "w_mbytes_per_sec": 0 00:14:52.657 }, 00:14:52.657 "claimed": false, 00:14:52.657 "zoned": false, 00:14:52.657 "supported_io_types": { 00:14:52.657 "read": true, 00:14:52.657 "write": true, 00:14:52.657 "unmap": true, 00:14:52.657 "write_zeroes": true, 00:14:52.657 "flush": false, 00:14:52.657 "reset": true, 00:14:52.657 "compare": false, 00:14:52.657 "compare_and_write": false, 00:14:52.657 "abort": false, 00:14:52.657 "nvme_admin": false, 00:14:52.657 "nvme_io": false 00:14:52.657 }, 00:14:52.657 "driver_specific": { 00:14:52.657 "lvol": { 00:14:52.657 "lvol_store_uuid": "36db5d9a-b44e-45b4-829f-2629cb301e7b", 00:14:52.657 "base_bdev": "aio_bdev", 00:14:52.657 "thin_provision": false, 00:14:52.657 "snapshot": false, 00:14:52.657 "clone": false, 00:14:52.657 "esnap_clone": false 00:14:52.657 } 00:14:52.657 } 00:14:52.657 } 00:14:52.657 ] 00:14:52.657 12:10:53 -- common/autotest_common.sh@893 -- # return 0 00:14:52.657 12:10:53 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36db5d9a-b44e-45b4-829f-2629cb301e7b 00:14:52.657 12:10:53 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:52.917 12:10:53 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:52.917 12:10:53 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36db5d9a-b44e-45b4-829f-2629cb301e7b 00:14:52.917 12:10:53 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:52.917 12:10:54 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:52.917 12:10:54 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:53.178 [2024-04-26 12:10:54.217704] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:53.178 12:10:54 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36db5d9a-b44e-45b4-829f-2629cb301e7b 00:14:53.178 12:10:54 -- common/autotest_common.sh@638 -- # local es=0 00:14:53.178 12:10:54 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36db5d9a-b44e-45b4-829f-2629cb301e7b 00:14:53.179 12:10:54 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.179 12:10:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:53.179 12:10:54 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.179 12:10:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:53.179 12:10:54 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.179 12:10:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:53.179 12:10:54 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.179 12:10:54 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:53.179 12:10:54 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36db5d9a-b44e-45b4-829f-2629cb301e7b 00:14:53.439 request: 00:14:53.439 { 00:14:53.439 "uuid": "36db5d9a-b44e-45b4-829f-2629cb301e7b", 00:14:53.439 "method": "bdev_lvol_get_lvstores", 00:14:53.439 "req_id": 1 00:14:53.439 } 00:14:53.439 Got JSON-RPC error response 00:14:53.439 response: 00:14:53.439 { 00:14:53.439 "code": -19, 00:14:53.439 "message": "No such device" 00:14:53.439 } 00:14:53.439 12:10:54 -- common/autotest_common.sh@641 -- # es=1 00:14:53.439 12:10:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:53.439 12:10:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:53.439 12:10:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:53.439 12:10:54 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:53.439 aio_bdev 00:14:53.439 12:10:54 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev fea1930f-7725-49eb-aa98-b6e08eeba16e 00:14:53.439 12:10:54 -- 
common/autotest_common.sh@885 -- # local bdev_name=fea1930f-7725-49eb-aa98-b6e08eeba16e 00:14:53.439 12:10:54 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:53.439 12:10:54 -- common/autotest_common.sh@887 -- # local i 00:14:53.439 12:10:54 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:53.439 12:10:54 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:53.439 12:10:54 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:53.701 12:10:54 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fea1930f-7725-49eb-aa98-b6e08eeba16e -t 2000 00:14:53.701 [ 00:14:53.701 { 00:14:53.701 "name": "fea1930f-7725-49eb-aa98-b6e08eeba16e", 00:14:53.701 "aliases": [ 00:14:53.701 "lvs/lvol" 00:14:53.701 ], 00:14:53.701 "product_name": "Logical Volume", 00:14:53.701 "block_size": 4096, 00:14:53.701 "num_blocks": 38912, 00:14:53.701 "uuid": "fea1930f-7725-49eb-aa98-b6e08eeba16e", 00:14:53.701 "assigned_rate_limits": { 00:14:53.701 "rw_ios_per_sec": 0, 00:14:53.701 "rw_mbytes_per_sec": 0, 00:14:53.701 "r_mbytes_per_sec": 0, 00:14:53.701 "w_mbytes_per_sec": 0 00:14:53.701 }, 00:14:53.701 "claimed": false, 00:14:53.701 "zoned": false, 00:14:53.701 "supported_io_types": { 00:14:53.701 "read": true, 00:14:53.701 "write": true, 00:14:53.701 "unmap": true, 00:14:53.701 "write_zeroes": true, 00:14:53.701 "flush": false, 00:14:53.701 "reset": true, 00:14:53.701 "compare": false, 00:14:53.701 "compare_and_write": false, 00:14:53.701 "abort": false, 00:14:53.701 "nvme_admin": false, 00:14:53.701 "nvme_io": false 00:14:53.701 }, 00:14:53.701 "driver_specific": { 00:14:53.701 "lvol": { 00:14:53.701 "lvol_store_uuid": "36db5d9a-b44e-45b4-829f-2629cb301e7b", 00:14:53.701 "base_bdev": "aio_bdev", 00:14:53.701 "thin_provision": false, 00:14:53.701 "snapshot": false, 00:14:53.701 "clone": false, 00:14:53.701 "esnap_clone": false 00:14:53.701 } 00:14:53.701 } 00:14:53.701 } 00:14:53.701 ] 00:14:53.701 12:10:54 -- common/autotest_common.sh@893 -- # return 0 00:14:53.701 12:10:54 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36db5d9a-b44e-45b4-829f-2629cb301e7b 00:14:53.701 12:10:54 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:53.962 12:10:55 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:53.962 12:10:55 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36db5d9a-b44e-45b4-829f-2629cb301e7b 00:14:53.962 12:10:55 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:53.962 12:10:55 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:53.962 12:10:55 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fea1930f-7725-49eb-aa98-b6e08eeba16e 00:14:54.223 12:10:55 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 36db5d9a-b44e-45b4-829f-2629cb301e7b 00:14:54.483 12:10:55 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:54.483 12:10:55 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:54.483 00:14:54.483 real 0m16.781s 00:14:54.483 user 
0m43.988s 00:14:54.483 sys 0m2.841s 00:14:54.483 12:10:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:54.483 12:10:55 -- common/autotest_common.sh@10 -- # set +x 00:14:54.483 ************************************ 00:14:54.483 END TEST lvs_grow_dirty 00:14:54.483 ************************************ 00:14:54.744 12:10:55 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:54.744 12:10:55 -- common/autotest_common.sh@794 -- # type=--id 00:14:54.744 12:10:55 -- common/autotest_common.sh@795 -- # id=0 00:14:54.744 12:10:55 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:14:54.744 12:10:55 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:54.744 12:10:55 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:14:54.744 12:10:55 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:14:54.744 12:10:55 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:14:54.744 12:10:55 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:54.744 nvmf_trace.0 00:14:54.744 12:10:55 -- common/autotest_common.sh@809 -- # return 0 00:14:54.744 12:10:55 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:54.744 12:10:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:54.744 12:10:55 -- nvmf/common.sh@117 -- # sync 00:14:54.744 12:10:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:54.744 12:10:55 -- nvmf/common.sh@120 -- # set +e 00:14:54.744 12:10:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:54.744 12:10:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:54.744 rmmod nvme_tcp 00:14:54.744 rmmod nvme_fabrics 00:14:54.744 rmmod nvme_keyring 00:14:54.744 12:10:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:54.744 12:10:55 -- nvmf/common.sh@124 -- # set -e 00:14:54.744 12:10:55 -- nvmf/common.sh@125 -- # return 0 00:14:54.744 12:10:55 -- nvmf/common.sh@478 -- # '[' -n 3360392 ']' 00:14:54.744 12:10:55 -- nvmf/common.sh@479 -- # killprocess 3360392 00:14:54.744 12:10:55 -- common/autotest_common.sh@936 -- # '[' -z 3360392 ']' 00:14:54.744 12:10:55 -- common/autotest_common.sh@940 -- # kill -0 3360392 00:14:54.744 12:10:55 -- common/autotest_common.sh@941 -- # uname 00:14:54.744 12:10:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:54.744 12:10:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3360392 00:14:54.744 12:10:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:54.744 12:10:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:54.744 12:10:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3360392' 00:14:54.744 killing process with pid 3360392 00:14:54.744 12:10:55 -- common/autotest_common.sh@955 -- # kill 3360392 00:14:54.744 12:10:55 -- common/autotest_common.sh@960 -- # wait 3360392 00:14:55.005 12:10:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:55.005 12:10:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:55.005 12:10:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:55.005 12:10:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.005 12:10:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:55.005 12:10:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.005 12:10:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.005 12:10:56 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:56.921 12:10:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:56.921 00:14:56.921 real 0m42.981s 00:14:56.921 user 1m4.836s 00:14:56.921 sys 0m9.889s 00:14:56.921 12:10:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:56.921 12:10:58 -- common/autotest_common.sh@10 -- # set +x 00:14:56.921 ************************************ 00:14:56.921 END TEST nvmf_lvs_grow 00:14:56.921 ************************************ 00:14:56.921 12:10:58 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:56.921 12:10:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:56.921 12:10:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.921 12:10:58 -- common/autotest_common.sh@10 -- # set +x 00:14:57.183 ************************************ 00:14:57.183 START TEST nvmf_bdev_io_wait 00:14:57.183 ************************************ 00:14:57.183 12:10:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:57.183 * Looking for test storage... 00:14:57.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:57.183 12:10:58 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.183 12:10:58 -- nvmf/common.sh@7 -- # uname -s 00:14:57.183 12:10:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.183 12:10:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.183 12:10:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.183 12:10:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.183 12:10:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.183 12:10:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.183 12:10:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.183 12:10:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.183 12:10:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.183 12:10:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.183 12:10:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:57.183 12:10:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:57.183 12:10:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.183 12:10:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.183 12:10:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:57.183 12:10:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.183 12:10:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:57.444 12:10:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.445 12:10:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.445 12:10:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.445 12:10:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.445 12:10:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.445 12:10:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.445 12:10:58 -- paths/export.sh@5 -- # export PATH 00:14:57.445 12:10:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.445 12:10:58 -- nvmf/common.sh@47 -- # : 0 00:14:57.445 12:10:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:57.445 12:10:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:57.445 12:10:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.445 12:10:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.445 12:10:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.445 12:10:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:57.445 12:10:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:57.445 12:10:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:57.445 12:10:58 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:57.445 12:10:58 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:57.445 12:10:58 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:57.445 12:10:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:57.445 12:10:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.445 12:10:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:57.445 12:10:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:57.445 12:10:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:57.445 12:10:58 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.445 12:10:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.445 12:10:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.445 12:10:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:57.445 12:10:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:57.445 12:10:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:57.445 12:10:58 -- common/autotest_common.sh@10 -- # set +x 00:15:05.593 12:11:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:05.594 12:11:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:05.594 12:11:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:05.594 12:11:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:05.594 12:11:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:05.594 12:11:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:05.594 12:11:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:05.594 12:11:05 -- nvmf/common.sh@295 -- # net_devs=() 00:15:05.594 12:11:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:05.594 12:11:05 -- nvmf/common.sh@296 -- # e810=() 00:15:05.594 12:11:05 -- nvmf/common.sh@296 -- # local -ga e810 00:15:05.594 12:11:05 -- nvmf/common.sh@297 -- # x722=() 00:15:05.594 12:11:05 -- nvmf/common.sh@297 -- # local -ga x722 00:15:05.594 12:11:05 -- nvmf/common.sh@298 -- # mlx=() 00:15:05.594 12:11:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:05.594 12:11:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:05.594 12:11:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:05.594 12:11:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:05.594 12:11:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:05.594 12:11:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:05.594 12:11:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:05.594 12:11:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:05.594 12:11:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:05.594 12:11:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:05.594 12:11:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:05.594 12:11:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:05.594 12:11:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:05.594 12:11:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:05.594 12:11:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:05.594 12:11:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.594 12:11:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:05.594 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:05.594 12:11:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:15:05.594 12:11:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:05.594 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:05.594 12:11:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:05.594 12:11:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.594 12:11:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.594 12:11:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:05.594 12:11:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.594 12:11:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:05.594 Found net devices under 0000:31:00.0: cvl_0_0 00:15:05.594 12:11:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.594 12:11:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.594 12:11:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.594 12:11:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:05.594 12:11:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.594 12:11:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:05.594 Found net devices under 0000:31:00.1: cvl_0_1 00:15:05.594 12:11:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.594 12:11:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:05.594 12:11:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:05.594 12:11:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:05.594 12:11:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.594 12:11:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.594 12:11:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:05.594 12:11:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:05.594 12:11:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:05.594 12:11:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:05.594 12:11:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:05.594 12:11:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:05.594 12:11:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.594 12:11:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:05.594 12:11:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:05.594 12:11:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:05.594 12:11:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:05.594 12:11:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:05.594 12:11:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:05.594 12:11:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:05.594 12:11:05 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:05.594 12:11:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:05.594 12:11:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:05.594 12:11:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:05.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:15:05.594 00:15:05.594 --- 10.0.0.2 ping statistics --- 00:15:05.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.594 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:15:05.594 12:11:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:05.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:05.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:15:05.594 00:15:05.594 --- 10.0.0.1 ping statistics --- 00:15:05.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.594 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:15:05.594 12:11:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.594 12:11:05 -- nvmf/common.sh@411 -- # return 0 00:15:05.594 12:11:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:05.594 12:11:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.594 12:11:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:05.594 12:11:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.594 12:11:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:05.594 12:11:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:05.594 12:11:05 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:05.594 12:11:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:05.594 12:11:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:05.594 12:11:05 -- common/autotest_common.sh@10 -- # set +x 00:15:05.594 12:11:05 -- nvmf/common.sh@470 -- # nvmfpid=3365326 00:15:05.594 12:11:05 -- nvmf/common.sh@471 -- # waitforlisten 3365326 00:15:05.594 12:11:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:05.594 12:11:05 -- common/autotest_common.sh@817 -- # '[' -z 3365326 ']' 00:15:05.594 12:11:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.594 12:11:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:05.594 12:11:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.594 12:11:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:05.594 12:11:05 -- common/autotest_common.sh@10 -- # set +x 00:15:05.594 [2024-04-26 12:11:05.733904] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
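For reference, the phy/TCP topology configured above boils down to: keep one port of the NIC in the default namespace as the initiator side, move the other into a namespace for the target, and address both on 10.0.0.0/24. A condensed sketch of those steps as they appear in the trace (illustrative):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator-side port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP in
ping -c 1 10.0.0.2                                               # target reachable from the initiator side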
00:15:05.594 [2024-04-26 12:11:05.733966] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.594 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.594 [2024-04-26 12:11:05.808171] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.594 [2024-04-26 12:11:05.882636] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.594 [2024-04-26 12:11:05.882675] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.594 [2024-04-26 12:11:05.882684] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.594 [2024-04-26 12:11:05.882692] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.594 [2024-04-26 12:11:05.882699] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.594 [2024-04-26 12:11:05.882885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.594 [2024-04-26 12:11:05.882982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.594 [2024-04-26 12:11:05.883124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.594 [2024-04-26 12:11:05.883124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.594 12:11:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:05.594 12:11:06 -- common/autotest_common.sh@850 -- # return 0 00:15:05.594 12:11:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:05.594 12:11:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:05.594 12:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:05.594 12:11:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.594 12:11:06 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:05.594 12:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.594 12:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:05.594 12:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:05.595 12:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.595 12:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:05.595 12:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:05.595 12:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.595 12:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:05.595 [2024-04-26 12:11:06.623080] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.595 12:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:05.595 12:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.595 12:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:05.595 Malloc0 00:15:05.595 12:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:05.595 12:11:06 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.595 12:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:05.595 12:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:05.595 12:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.595 12:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:05.595 12:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.595 12:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.595 12:11:06 -- common/autotest_common.sh@10 -- # set +x 00:15:05.595 [2024-04-26 12:11:06.692100] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.595 12:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3365673 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@30 -- # READ_PID=3365675 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:05.595 12:11:06 -- nvmf/common.sh@521 -- # config=() 00:15:05.595 12:11:06 -- nvmf/common.sh@521 -- # local subsystem config 00:15:05.595 12:11:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:05.595 12:11:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:05.595 { 00:15:05.595 "params": { 00:15:05.595 "name": "Nvme$subsystem", 00:15:05.595 "trtype": "$TEST_TRANSPORT", 00:15:05.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.595 "adrfam": "ipv4", 00:15:05.595 "trsvcid": "$NVMF_PORT", 00:15:05.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.595 "hdgst": ${hdgst:-false}, 00:15:05.595 "ddgst": ${ddgst:-false} 00:15:05.595 }, 00:15:05.595 "method": "bdev_nvme_attach_controller" 00:15:05.595 } 00:15:05.595 EOF 00:15:05.595 )") 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3365677 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:05.595 12:11:06 -- nvmf/common.sh@521 -- # config=() 00:15:05.595 12:11:06 -- nvmf/common.sh@521 -- # local subsystem config 00:15:05.595 12:11:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:05.595 12:11:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:05.595 { 00:15:05.595 "params": { 00:15:05.595 "name": "Nvme$subsystem", 00:15:05.595 "trtype": "$TEST_TRANSPORT", 00:15:05.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.595 "adrfam": "ipv4", 00:15:05.595 "trsvcid": "$NVMF_PORT", 00:15:05.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.595 "hdgst": ${hdgst:-false}, 00:15:05.595 "ddgst": ${ddgst:-false} 00:15:05.595 }, 00:15:05.595 "method": "bdev_nvme_attach_controller" 00:15:05.595 } 00:15:05.595 EOF 00:15:05.595 )") 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3365680 00:15:05.595 
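The bdev_io_wait test fans out into four bdevperf processes, one per I/O type, each pinned to its own core and fed the same generated target config. A simplified sketch of that fan-out (the script streams the config on /dev/fd/63; a temp file is used here only for brevity):

BPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
gen_nvmf_target_json > /tmp/bdevperf.json    # simplification of the /dev/fd/63 plumbing

$BPERF -m 0x10 -i 1 --json /tmp/bdevperf.json -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BPERF -m 0x20 -i 2 --json /tmp/bdevperf.json -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$BPERF -m 0x40 -i 3 --json /tmp/bdevperf.json -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BPERF -m 0x80 -i 4 --json /tmp/bdevperf.json -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID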
12:11:06 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@35 -- # sync 00:15:05.595 12:11:06 -- nvmf/common.sh@543 -- # cat 00:15:05.595 12:11:06 -- nvmf/common.sh@521 -- # config=() 00:15:05.595 12:11:06 -- nvmf/common.sh@521 -- # local subsystem config 00:15:05.595 12:11:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:05.595 12:11:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:05.595 { 00:15:05.595 "params": { 00:15:05.595 "name": "Nvme$subsystem", 00:15:05.595 "trtype": "$TEST_TRANSPORT", 00:15:05.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.595 "adrfam": "ipv4", 00:15:05.595 "trsvcid": "$NVMF_PORT", 00:15:05.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.595 "hdgst": ${hdgst:-false}, 00:15:05.595 "ddgst": ${ddgst:-false} 00:15:05.595 }, 00:15:05.595 "method": "bdev_nvme_attach_controller" 00:15:05.595 } 00:15:05.595 EOF 00:15:05.595 )") 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:05.595 12:11:06 -- nvmf/common.sh@521 -- # config=() 00:15:05.595 12:11:06 -- nvmf/common.sh@543 -- # cat 00:15:05.595 12:11:06 -- nvmf/common.sh@521 -- # local subsystem config 00:15:05.595 12:11:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:05.595 12:11:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:05.595 { 00:15:05.595 "params": { 00:15:05.595 "name": "Nvme$subsystem", 00:15:05.595 "trtype": "$TEST_TRANSPORT", 00:15:05.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.595 "adrfam": "ipv4", 00:15:05.595 "trsvcid": "$NVMF_PORT", 00:15:05.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.595 "hdgst": ${hdgst:-false}, 00:15:05.595 "ddgst": ${ddgst:-false} 00:15:05.595 }, 00:15:05.595 "method": "bdev_nvme_attach_controller" 00:15:05.595 } 00:15:05.595 EOF 00:15:05.595 )") 00:15:05.595 12:11:06 -- nvmf/common.sh@543 -- # cat 00:15:05.595 12:11:06 -- target/bdev_io_wait.sh@37 -- # wait 3365673 00:15:05.595 12:11:06 -- nvmf/common.sh@543 -- # cat 00:15:05.595 12:11:06 -- nvmf/common.sh@545 -- # jq . 00:15:05.595 12:11:06 -- nvmf/common.sh@545 -- # jq . 00:15:05.595 12:11:06 -- nvmf/common.sh@545 -- # jq . 00:15:05.595 12:11:06 -- nvmf/common.sh@546 -- # IFS=, 00:15:05.595 12:11:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:05.595 "params": { 00:15:05.595 "name": "Nvme1", 00:15:05.595 "trtype": "tcp", 00:15:05.595 "traddr": "10.0.0.2", 00:15:05.595 "adrfam": "ipv4", 00:15:05.595 "trsvcid": "4420", 00:15:05.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.595 "hdgst": false, 00:15:05.595 "ddgst": false 00:15:05.595 }, 00:15:05.595 "method": "bdev_nvme_attach_controller" 00:15:05.595 }' 00:15:05.595 12:11:06 -- nvmf/common.sh@545 -- # jq . 
00:15:05.595 12:11:06 -- nvmf/common.sh@546 -- # IFS=, 00:15:05.595 12:11:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:05.595 "params": { 00:15:05.595 "name": "Nvme1", 00:15:05.595 "trtype": "tcp", 00:15:05.595 "traddr": "10.0.0.2", 00:15:05.595 "adrfam": "ipv4", 00:15:05.595 "trsvcid": "4420", 00:15:05.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.595 "hdgst": false, 00:15:05.595 "ddgst": false 00:15:05.595 }, 00:15:05.595 "method": "bdev_nvme_attach_controller" 00:15:05.595 }' 00:15:05.595 12:11:06 -- nvmf/common.sh@546 -- # IFS=, 00:15:05.595 12:11:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:05.595 "params": { 00:15:05.595 "name": "Nvme1", 00:15:05.595 "trtype": "tcp", 00:15:05.595 "traddr": "10.0.0.2", 00:15:05.595 "adrfam": "ipv4", 00:15:05.595 "trsvcid": "4420", 00:15:05.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.595 "hdgst": false, 00:15:05.595 "ddgst": false 00:15:05.595 }, 00:15:05.595 "method": "bdev_nvme_attach_controller" 00:15:05.595 }' 00:15:05.595 12:11:06 -- nvmf/common.sh@546 -- # IFS=, 00:15:05.595 12:11:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:05.595 "params": { 00:15:05.595 "name": "Nvme1", 00:15:05.595 "trtype": "tcp", 00:15:05.595 "traddr": "10.0.0.2", 00:15:05.595 "adrfam": "ipv4", 00:15:05.595 "trsvcid": "4420", 00:15:05.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.595 "hdgst": false, 00:15:05.595 "ddgst": false 00:15:05.595 }, 00:15:05.595 "method": "bdev_nvme_attach_controller" 00:15:05.595 }' 00:15:05.595 [2024-04-26 12:11:06.742861] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:05.595 [2024-04-26 12:11:06.742913] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:05.595 [2024-04-26 12:11:06.744073] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:05.595 [2024-04-26 12:11:06.744118] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:05.596 [2024-04-26 12:11:06.746859] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:05.596 [2024-04-26 12:11:06.746902] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:05.596 [2024-04-26 12:11:06.749138] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:15:05.596 [2024-04-26 12:11:06.749184] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:05.596 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.856 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.856 [2024-04-26 12:11:06.888614] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.856 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.856 [2024-04-26 12:11:06.937258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:05.856 [2024-04-26 12:11:06.943409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.856 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.856 [2024-04-26 12:11:06.991705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:05.856 [2024-04-26 12:11:07.005586] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.856 [2024-04-26 12:11:07.054999] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.856 [2024-04-26 12:11:07.055644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:06.116 [2024-04-26 12:11:07.102087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:06.116 Running I/O for 1 seconds... 00:15:06.116 Running I/O for 1 seconds... 00:15:06.116 Running I/O for 1 seconds... 00:15:06.116 Running I/O for 1 seconds... 00:15:07.106 00:15:07.106 Latency(us) 00:15:07.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.106 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:07.106 Nvme1n1 : 1.00 19985.25 78.07 0.00 0.00 6387.72 4259.84 13216.43 00:15:07.106 =================================================================================================================== 00:15:07.106 Total : 19985.25 78.07 0.00 0.00 6387.72 4259.84 13216.43 00:15:07.106 00:15:07.106 Latency(us) 00:15:07.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.106 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:07.106 Nvme1n1 : 1.02 7320.67 28.60 0.00 0.00 17329.51 5652.48 28398.93 00:15:07.106 =================================================================================================================== 00:15:07.106 Total : 7320.67 28.60 0.00 0.00 17329.51 5652.48 28398.93 00:15:07.106 00:15:07.106 Latency(us) 00:15:07.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.106 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:07.106 Nvme1n1 : 1.00 188232.38 735.28 0.00 0.00 677.34 266.24 761.17 00:15:07.106 =================================================================================================================== 00:15:07.106 Total : 188232.38 735.28 0.00 0.00 677.34 266.24 761.17 00:15:07.106 00:15:07.106 Latency(us) 00:15:07.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.106 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:07.106 Nvme1n1 : 1.00 7694.68 30.06 0.00 0.00 16591.85 4587.52 42598.40 00:15:07.106 =================================================================================================================== 00:15:07.106 Total : 7694.68 30.06 0.00 0.00 16591.85 4587.52 42598.40 00:15:07.366 12:11:08 -- target/bdev_io_wait.sh@38 -- # wait 3365675 00:15:07.366 
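For reference, the per-job config each bdevperf instance above reads from /dev/fd/63 is an ordinary SPDK JSON config; written to a file it would look roughly like the sketch below. Addresses, NQNs and flags are copied from this run; the outer "subsystems"/"config" wrapper is the standard SPDK config layout and is assumed here, since the trace only prints the inner method objects.
  # hedged sketch: one of the four bdev_io_wait jobs, run standalone
  cat > /tmp/nvme_attach.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # same queue depth, I/O size, workload and memory size as the write job above
  ./build/examples/bdevperf --json /tmp/nvme_attach.json -q 128 -o 4096 -w write -t 1 -s 256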
12:11:08 -- target/bdev_io_wait.sh@39 -- # wait 3365677 00:15:07.366 12:11:08 -- target/bdev_io_wait.sh@40 -- # wait 3365680 00:15:07.366 12:11:08 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:07.366 12:11:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:07.366 12:11:08 -- common/autotest_common.sh@10 -- # set +x 00:15:07.366 12:11:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:07.367 12:11:08 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:07.367 12:11:08 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:07.367 12:11:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:07.367 12:11:08 -- nvmf/common.sh@117 -- # sync 00:15:07.367 12:11:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:07.367 12:11:08 -- nvmf/common.sh@120 -- # set +e 00:15:07.367 12:11:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:07.367 12:11:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:07.367 rmmod nvme_tcp 00:15:07.367 rmmod nvme_fabrics 00:15:07.367 rmmod nvme_keyring 00:15:07.367 12:11:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:07.367 12:11:08 -- nvmf/common.sh@124 -- # set -e 00:15:07.367 12:11:08 -- nvmf/common.sh@125 -- # return 0 00:15:07.367 12:11:08 -- nvmf/common.sh@478 -- # '[' -n 3365326 ']' 00:15:07.367 12:11:08 -- nvmf/common.sh@479 -- # killprocess 3365326 00:15:07.367 12:11:08 -- common/autotest_common.sh@936 -- # '[' -z 3365326 ']' 00:15:07.367 12:11:08 -- common/autotest_common.sh@940 -- # kill -0 3365326 00:15:07.367 12:11:08 -- common/autotest_common.sh@941 -- # uname 00:15:07.367 12:11:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.367 12:11:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3365326 00:15:07.627 12:11:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:07.627 12:11:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:07.627 12:11:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3365326' 00:15:07.627 killing process with pid 3365326 00:15:07.627 12:11:08 -- common/autotest_common.sh@955 -- # kill 3365326 00:15:07.627 12:11:08 -- common/autotest_common.sh@960 -- # wait 3365326 00:15:07.627 12:11:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:07.627 12:11:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:07.627 12:11:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:07.627 12:11:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.627 12:11:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:07.627 12:11:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.627 12:11:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.627 12:11:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.172 12:11:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:10.172 00:15:10.172 real 0m12.519s 00:15:10.172 user 0m18.666s 00:15:10.172 sys 0m6.738s 00:15:10.172 12:11:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:10.172 12:11:10 -- common/autotest_common.sh@10 -- # set +x 00:15:10.172 ************************************ 00:15:10.172 END TEST nvmf_bdev_io_wait 00:15:10.172 ************************************ 00:15:10.172 12:11:10 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:10.172 12:11:10 -- common/autotest_common.sh@1087 
-- # '[' 3 -le 1 ']' 00:15:10.172 12:11:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:10.172 12:11:10 -- common/autotest_common.sh@10 -- # set +x 00:15:10.172 ************************************ 00:15:10.172 START TEST nvmf_queue_depth 00:15:10.172 ************************************ 00:15:10.172 12:11:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:10.172 * Looking for test storage... 00:15:10.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.172 12:11:11 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.172 12:11:11 -- nvmf/common.sh@7 -- # uname -s 00:15:10.172 12:11:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.172 12:11:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.172 12:11:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.172 12:11:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.172 12:11:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.172 12:11:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.172 12:11:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.172 12:11:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.172 12:11:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.172 12:11:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.172 12:11:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:10.172 12:11:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:10.172 12:11:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.172 12:11:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.172 12:11:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.172 12:11:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.172 12:11:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.172 12:11:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.172 12:11:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.172 12:11:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.172 12:11:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.172 12:11:11 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.172 12:11:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.172 12:11:11 -- paths/export.sh@5 -- # export PATH 00:15:10.172 12:11:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.172 12:11:11 -- nvmf/common.sh@47 -- # : 0 00:15:10.172 12:11:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.172 12:11:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.172 12:11:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.172 12:11:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.172 12:11:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.172 12:11:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.172 12:11:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.173 12:11:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.173 12:11:11 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:10.173 12:11:11 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:10.173 12:11:11 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:10.173 12:11:11 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:10.173 12:11:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:10.173 12:11:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.173 12:11:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:10.173 12:11:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:10.173 12:11:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:10.173 12:11:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.173 12:11:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.173 12:11:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.173 12:11:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:10.173 12:11:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:10.173 12:11:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:10.173 12:11:11 -- 
common/autotest_common.sh@10 -- # set +x 00:15:16.759 12:11:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:16.759 12:11:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:16.759 12:11:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:16.759 12:11:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:16.759 12:11:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:16.759 12:11:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:16.759 12:11:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:16.759 12:11:17 -- nvmf/common.sh@295 -- # net_devs=() 00:15:16.759 12:11:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:16.759 12:11:17 -- nvmf/common.sh@296 -- # e810=() 00:15:16.759 12:11:17 -- nvmf/common.sh@296 -- # local -ga e810 00:15:16.759 12:11:17 -- nvmf/common.sh@297 -- # x722=() 00:15:16.759 12:11:17 -- nvmf/common.sh@297 -- # local -ga x722 00:15:16.759 12:11:17 -- nvmf/common.sh@298 -- # mlx=() 00:15:16.759 12:11:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:16.759 12:11:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:16.759 12:11:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:16.759 12:11:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:16.759 12:11:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:16.759 12:11:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:16.759 12:11:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:16.759 12:11:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:16.759 12:11:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:16.759 12:11:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:16.759 12:11:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:16.759 12:11:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:16.759 12:11:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:16.759 12:11:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:16.759 12:11:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:16.759 12:11:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:16.759 12:11:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:16.759 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:16.759 12:11:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:16.759 12:11:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:16.759 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:16.759 12:11:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
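Condensed view of the NIC discovery this trace is walking through: each E810 port (device ID 0x159b) matched above is resolved, on the lines that follow, to its kernel netdev by globbing sysfs. A minimal sketch, assuming the two PCI addresses printed in this run:
  # hedged sketch of the pci -> netdev resolution done by gather_supported_nvmf_pci_devs
  for pci in 0000:31:00.0 0000:31:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          echo "Found net device under $pci: $(basename "$dev")"
      done
  done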
00:15:16.759 12:11:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:16.759 12:11:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:16.759 12:11:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:16.759 12:11:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:16.759 12:11:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:16.759 12:11:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:16.759 Found net devices under 0000:31:00.0: cvl_0_0 00:15:16.759 12:11:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:16.759 12:11:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:16.759 12:11:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:16.759 12:11:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:16.759 12:11:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:16.759 12:11:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:16.759 Found net devices under 0000:31:00.1: cvl_0_1 00:15:16.759 12:11:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:16.759 12:11:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:16.759 12:11:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:16.759 12:11:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:16.759 12:11:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:16.759 12:11:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.759 12:11:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.759 12:11:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:16.759 12:11:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:16.759 12:11:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:16.759 12:11:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:16.759 12:11:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:16.759 12:11:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:16.759 12:11:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.759 12:11:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:16.759 12:11:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:16.759 12:11:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:16.759 12:11:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:16.759 12:11:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:16.759 12:11:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:16.759 12:11:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:16.759 12:11:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.021 12:11:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.021 12:11:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.021 12:11:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:17.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:17.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.914 ms 00:15:17.021 00:15:17.021 --- 10.0.0.2 ping statistics --- 00:15:17.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.021 rtt min/avg/max/mdev = 0.914/0.914/0.914/0.000 ms 00:15:17.021 12:11:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:15:17.021 00:15:17.021 --- 10.0.0.1 ping statistics --- 00:15:17.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.021 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:15:17.021 12:11:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.021 12:11:18 -- nvmf/common.sh@411 -- # return 0 00:15:17.021 12:11:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:17.021 12:11:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.021 12:11:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:17.021 12:11:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:17.021 12:11:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.021 12:11:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:17.021 12:11:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:17.021 12:11:18 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:17.021 12:11:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:17.021 12:11:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:17.021 12:11:18 -- common/autotest_common.sh@10 -- # set +x 00:15:17.021 12:11:18 -- nvmf/common.sh@470 -- # nvmfpid=3370125 00:15:17.021 12:11:18 -- nvmf/common.sh@471 -- # waitforlisten 3370125 00:15:17.021 12:11:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:17.021 12:11:18 -- common/autotest_common.sh@817 -- # '[' -z 3370125 ']' 00:15:17.021 12:11:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.021 12:11:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:17.021 12:11:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.021 12:11:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:17.021 12:11:18 -- common/autotest_common.sh@10 -- # set +x 00:15:17.021 [2024-04-26 12:11:18.182272] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:17.021 [2024-04-26 12:11:18.182335] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.021 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.282 [2024-04-26 12:11:18.269487] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.282 [2024-04-26 12:11:18.360676] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.282 [2024-04-26 12:11:18.360746] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:17.282 [2024-04-26 12:11:18.360754] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.282 [2024-04-26 12:11:18.360761] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.282 [2024-04-26 12:11:18.360767] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.282 [2024-04-26 12:11:18.360791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.854 12:11:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:17.854 12:11:18 -- common/autotest_common.sh@850 -- # return 0 00:15:17.854 12:11:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:17.854 12:11:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:17.854 12:11:18 -- common/autotest_common.sh@10 -- # set +x 00:15:17.854 12:11:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.854 12:11:19 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:17.854 12:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.854 12:11:19 -- common/autotest_common.sh@10 -- # set +x 00:15:17.854 [2024-04-26 12:11:19.011968] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.854 12:11:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.854 12:11:19 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:17.854 12:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.854 12:11:19 -- common/autotest_common.sh@10 -- # set +x 00:15:17.854 Malloc0 00:15:17.854 12:11:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.854 12:11:19 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:17.854 12:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.854 12:11:19 -- common/autotest_common.sh@10 -- # set +x 00:15:17.854 12:11:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.854 12:11:19 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:17.854 12:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.854 12:11:19 -- common/autotest_common.sh@10 -- # set +x 00:15:17.854 12:11:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.854 12:11:19 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.854 12:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.854 12:11:19 -- common/autotest_common.sh@10 -- # set +x 00:15:17.854 [2024-04-26 12:11:19.066672] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.854 12:11:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.854 12:11:19 -- target/queue_depth.sh@30 -- # bdevperf_pid=3370454 00:15:17.854 12:11:19 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:17.854 12:11:19 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:18.115 12:11:19 -- target/queue_depth.sh@33 -- # waitforlisten 3370454 /var/tmp/bdevperf.sock 00:15:18.115 12:11:19 -- common/autotest_common.sh@817 -- # '[' -z 3370454 ']' 
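The target bring-up traced above, with rpc_cmd expanded to the underlying scripts/rpc.py calls it effectively forwards to (same arguments as in the log; a sketch against a running nvmf_tgt, not a verbatim replay of the harness):
  # create the TCP transport and export a 64 MiB malloc bdev over NVMe/TCP
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420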
00:15:18.115 12:11:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:18.115 12:11:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:18.115 12:11:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:18.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:18.115 12:11:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:18.115 12:11:19 -- common/autotest_common.sh@10 -- # set +x 00:15:18.115 [2024-04-26 12:11:19.122195] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:18.115 [2024-04-26 12:11:19.122254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370454 ] 00:15:18.115 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.115 [2024-04-26 12:11:19.186634] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.115 [2024-04-26 12:11:19.260246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.056 12:11:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:19.056 12:11:19 -- common/autotest_common.sh@850 -- # return 0 00:15:19.056 12:11:19 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:19.056 12:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.056 12:11:19 -- common/autotest_common.sh@10 -- # set +x 00:15:19.056 NVMe0n1 00:15:19.057 12:11:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.057 12:11:20 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:19.057 Running I/O for 10 seconds... 
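Condensed form of the queue-depth run: bdevperf is started idle (-z) on its own RPC socket, the NVMe-oF controller is attached through that socket, and perform_tests kicks off the actual I/O. Paths and arguments are the ones shown above; the backgrounding and waitforlisten handling of the harness is simplified.
  # hedged sketch of the three-step flow driven by queue_depth.sh
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # (the test records the PID and waits for /var/tmp/bdevperf.sock before issuing RPCs)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests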
00:15:29.059 00:15:29.059 Latency(us) 00:15:29.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.059 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:29.059 Verification LBA range: start 0x0 length 0x4000 00:15:29.060 NVMe0n1 : 10.04 11542.58 45.09 0.00 0.00 88384.81 6608.21 71652.69 00:15:29.060 =================================================================================================================== 00:15:29.060 Total : 11542.58 45.09 0.00 0.00 88384.81 6608.21 71652.69 00:15:29.060 0 00:15:29.060 12:11:30 -- target/queue_depth.sh@39 -- # killprocess 3370454 00:15:29.060 12:11:30 -- common/autotest_common.sh@936 -- # '[' -z 3370454 ']' 00:15:29.060 12:11:30 -- common/autotest_common.sh@940 -- # kill -0 3370454 00:15:29.060 12:11:30 -- common/autotest_common.sh@941 -- # uname 00:15:29.331 12:11:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:29.331 12:11:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3370454 00:15:29.331 12:11:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:29.331 12:11:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:29.331 12:11:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3370454' 00:15:29.331 killing process with pid 3370454 00:15:29.331 12:11:30 -- common/autotest_common.sh@955 -- # kill 3370454 00:15:29.331 Received shutdown signal, test time was about 10.000000 seconds 00:15:29.331 00:15:29.331 Latency(us) 00:15:29.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.331 =================================================================================================================== 00:15:29.331 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:29.331 12:11:30 -- common/autotest_common.sh@960 -- # wait 3370454 00:15:29.331 12:11:30 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:29.331 12:11:30 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:29.331 12:11:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:29.331 12:11:30 -- nvmf/common.sh@117 -- # sync 00:15:29.331 12:11:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.331 12:11:30 -- nvmf/common.sh@120 -- # set +e 00:15:29.331 12:11:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.331 12:11:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.331 rmmod nvme_tcp 00:15:29.331 rmmod nvme_fabrics 00:15:29.331 rmmod nvme_keyring 00:15:29.331 12:11:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.331 12:11:30 -- nvmf/common.sh@124 -- # set -e 00:15:29.331 12:11:30 -- nvmf/common.sh@125 -- # return 0 00:15:29.331 12:11:30 -- nvmf/common.sh@478 -- # '[' -n 3370125 ']' 00:15:29.331 12:11:30 -- nvmf/common.sh@479 -- # killprocess 3370125 00:15:29.331 12:11:30 -- common/autotest_common.sh@936 -- # '[' -z 3370125 ']' 00:15:29.331 12:11:30 -- common/autotest_common.sh@940 -- # kill -0 3370125 00:15:29.331 12:11:30 -- common/autotest_common.sh@941 -- # uname 00:15:29.331 12:11:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:29.331 12:11:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3370125 00:15:29.591 12:11:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:29.591 12:11:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:29.591 12:11:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3370125' 00:15:29.591 killing process with pid 3370125 00:15:29.592 
12:11:30 -- common/autotest_common.sh@955 -- # kill 3370125 00:15:29.592 12:11:30 -- common/autotest_common.sh@960 -- # wait 3370125 00:15:29.592 12:11:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:29.592 12:11:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:29.592 12:11:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:29.592 12:11:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.592 12:11:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.592 12:11:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.592 12:11:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.592 12:11:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.135 12:11:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:32.135 00:15:32.135 real 0m21.789s 00:15:32.135 user 0m25.575s 00:15:32.135 sys 0m6.338s 00:15:32.135 12:11:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:32.135 12:11:32 -- common/autotest_common.sh@10 -- # set +x 00:15:32.135 ************************************ 00:15:32.135 END TEST nvmf_queue_depth 00:15:32.135 ************************************ 00:15:32.135 12:11:32 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:32.135 12:11:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:32.135 12:11:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:32.135 12:11:32 -- common/autotest_common.sh@10 -- # set +x 00:15:32.135 ************************************ 00:15:32.135 START TEST nvmf_multipath 00:15:32.135 ************************************ 00:15:32.135 12:11:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:32.135 * Looking for test storage... 
00:15:32.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.135 12:11:33 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.135 12:11:33 -- nvmf/common.sh@7 -- # uname -s 00:15:32.135 12:11:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.135 12:11:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.135 12:11:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.135 12:11:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.135 12:11:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.135 12:11:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.135 12:11:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.135 12:11:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.135 12:11:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.135 12:11:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.135 12:11:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:32.135 12:11:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:32.135 12:11:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.135 12:11:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.135 12:11:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.135 12:11:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.135 12:11:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.135 12:11:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.135 12:11:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.135 12:11:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.135 12:11:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.135 12:11:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.135 12:11:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.135 12:11:33 -- paths/export.sh@5 -- # export PATH 00:15:32.135 12:11:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.135 12:11:33 -- nvmf/common.sh@47 -- # : 0 00:15:32.135 12:11:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.135 12:11:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.135 12:11:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.135 12:11:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.135 12:11:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.135 12:11:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:32.135 12:11:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.135 12:11:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.135 12:11:33 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:32.135 12:11:33 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:32.135 12:11:33 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:32.135 12:11:33 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:32.135 12:11:33 -- target/multipath.sh@43 -- # nvmftestinit 00:15:32.135 12:11:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:32.135 12:11:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.135 12:11:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:32.135 12:11:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:32.135 12:11:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:32.135 12:11:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.135 12:11:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.135 12:11:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.135 12:11:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:32.135 12:11:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:32.135 12:11:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:32.135 12:11:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.353 12:11:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:40.353 12:11:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:40.353 12:11:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:40.353 12:11:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:40.353 12:11:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:40.353 12:11:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:40.353 12:11:40 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:15:40.353 12:11:40 -- nvmf/common.sh@295 -- # net_devs=() 00:15:40.353 12:11:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:40.353 12:11:40 -- nvmf/common.sh@296 -- # e810=() 00:15:40.353 12:11:40 -- nvmf/common.sh@296 -- # local -ga e810 00:15:40.353 12:11:40 -- nvmf/common.sh@297 -- # x722=() 00:15:40.353 12:11:40 -- nvmf/common.sh@297 -- # local -ga x722 00:15:40.353 12:11:40 -- nvmf/common.sh@298 -- # mlx=() 00:15:40.353 12:11:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:40.353 12:11:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.353 12:11:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.353 12:11:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.353 12:11:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.353 12:11:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.353 12:11:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.353 12:11:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.353 12:11:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.353 12:11:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.353 12:11:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:40.353 12:11:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.353 12:11:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:40.353 12:11:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:40.353 12:11:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:40.353 12:11:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.353 12:11:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:40.353 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:40.353 12:11:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.353 12:11:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:40.353 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:40.353 12:11:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:40.353 12:11:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:40.353 12:11:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.353 12:11:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.353 12:11:40 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:15:40.353 12:11:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.353 12:11:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:40.353 Found net devices under 0000:31:00.0: cvl_0_0 00:15:40.353 12:11:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.353 12:11:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.353 12:11:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.353 12:11:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:40.354 12:11:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.354 12:11:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:40.354 Found net devices under 0000:31:00.1: cvl_0_1 00:15:40.354 12:11:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.354 12:11:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:40.354 12:11:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:40.354 12:11:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:40.354 12:11:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:40.354 12:11:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:40.354 12:11:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.354 12:11:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.354 12:11:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:40.354 12:11:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:40.354 12:11:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:40.354 12:11:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:40.354 12:11:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:40.354 12:11:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:40.354 12:11:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.354 12:11:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:40.354 12:11:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:40.354 12:11:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:40.354 12:11:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:40.354 12:11:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:40.354 12:11:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:40.354 12:11:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:40.354 12:11:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:40.354 12:11:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:40.354 12:11:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:40.354 12:11:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:40.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:15:40.354 00:15:40.354 --- 10.0.0.2 ping statistics --- 00:15:40.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.354 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:15:40.354 12:11:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:40.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:40.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:15:40.354 00:15:40.354 --- 10.0.0.1 ping statistics --- 00:15:40.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.354 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:15:40.354 12:11:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.354 12:11:40 -- nvmf/common.sh@411 -- # return 0 00:15:40.354 12:11:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:40.354 12:11:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.354 12:11:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:40.354 12:11:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:40.354 12:11:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.354 12:11:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:40.354 12:11:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:40.354 12:11:40 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:40.354 12:11:40 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:40.354 only one NIC for nvmf test 00:15:40.354 12:11:40 -- target/multipath.sh@47 -- # nvmftestfini 00:15:40.354 12:11:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:40.354 12:11:40 -- nvmf/common.sh@117 -- # sync 00:15:40.354 12:11:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:40.354 12:11:40 -- nvmf/common.sh@120 -- # set +e 00:15:40.354 12:11:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:40.354 12:11:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:40.354 rmmod nvme_tcp 00:15:40.354 rmmod nvme_fabrics 00:15:40.354 rmmod nvme_keyring 00:15:40.354 12:11:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:40.354 12:11:40 -- nvmf/common.sh@124 -- # set -e 00:15:40.354 12:11:40 -- nvmf/common.sh@125 -- # return 0 00:15:40.354 12:11:40 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:40.354 12:11:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:40.354 12:11:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:40.354 12:11:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:40.354 12:11:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:40.354 12:11:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:40.354 12:11:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.354 12:11:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.354 12:11:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.298 12:11:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:41.560 12:11:42 -- target/multipath.sh@48 -- # exit 0 00:15:41.560 12:11:42 -- target/multipath.sh@1 -- # nvmftestfini 00:15:41.560 12:11:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:41.560 12:11:42 -- nvmf/common.sh@117 -- # sync 00:15:41.560 12:11:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.560 12:11:42 -- nvmf/common.sh@120 -- # set +e 00:15:41.560 12:11:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.560 12:11:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.560 12:11:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.560 12:11:42 -- nvmf/common.sh@124 -- # set -e 00:15:41.560 12:11:42 -- nvmf/common.sh@125 -- # return 0 00:15:41.560 12:11:42 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:41.560 12:11:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:41.560 12:11:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:41.560 12:11:42 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:15:41.560 12:11:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.560 12:11:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.560 12:11:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.560 12:11:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.560 12:11:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.560 12:11:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:41.560 00:15:41.560 real 0m9.592s 00:15:41.560 user 0m2.017s 00:15:41.560 sys 0m5.477s 00:15:41.560 12:11:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:41.560 12:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:41.560 ************************************ 00:15:41.560 END TEST nvmf_multipath 00:15:41.560 ************************************ 00:15:41.560 12:11:42 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:41.560 12:11:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:41.560 12:11:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.560 12:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:41.560 ************************************ 00:15:41.560 START TEST nvmf_zcopy 00:15:41.560 ************************************ 00:15:41.560 12:11:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:41.822 * Looking for test storage... 00:15:41.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:41.822 12:11:42 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.822 12:11:42 -- nvmf/common.sh@7 -- # uname -s 00:15:41.822 12:11:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.822 12:11:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.822 12:11:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.822 12:11:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.822 12:11:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.822 12:11:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.822 12:11:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.822 12:11:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.822 12:11:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.823 12:11:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.823 12:11:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:41.823 12:11:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:41.823 12:11:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.823 12:11:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.823 12:11:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.823 12:11:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.823 12:11:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:41.823 12:11:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.823 12:11:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.823 12:11:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.823 
12:11:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.823 12:11:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.823 12:11:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.823 12:11:42 -- paths/export.sh@5 -- # export PATH 00:15:41.823 12:11:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.823 12:11:42 -- nvmf/common.sh@47 -- # : 0 00:15:41.823 12:11:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.823 12:11:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.823 12:11:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.823 12:11:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.823 12:11:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.823 12:11:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.823 12:11:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.823 12:11:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.823 12:11:42 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:41.823 12:11:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:41.823 12:11:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.823 12:11:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:41.823 12:11:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:41.823 12:11:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:41.823 12:11:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.823 12:11:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:15:41.823 12:11:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.823 12:11:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:41.823 12:11:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:41.823 12:11:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:41.823 12:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:49.975 12:11:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:49.975 12:11:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:49.975 12:11:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:49.975 12:11:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:49.975 12:11:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:49.975 12:11:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:49.975 12:11:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:49.975 12:11:49 -- nvmf/common.sh@295 -- # net_devs=() 00:15:49.976 12:11:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:49.976 12:11:49 -- nvmf/common.sh@296 -- # e810=() 00:15:49.976 12:11:49 -- nvmf/common.sh@296 -- # local -ga e810 00:15:49.976 12:11:49 -- nvmf/common.sh@297 -- # x722=() 00:15:49.976 12:11:49 -- nvmf/common.sh@297 -- # local -ga x722 00:15:49.976 12:11:49 -- nvmf/common.sh@298 -- # mlx=() 00:15:49.976 12:11:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:49.976 12:11:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.976 12:11:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.976 12:11:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.976 12:11:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.976 12:11:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.976 12:11:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.976 12:11:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.976 12:11:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.976 12:11:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.976 12:11:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.976 12:11:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.976 12:11:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:49.976 12:11:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:49.976 12:11:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:49.976 12:11:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.976 12:11:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:49.976 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:49.976 12:11:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.976 12:11:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:49.976 Found 0000:31:00.1 (0x8086 - 
0x159b) 00:15:49.976 12:11:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:49.976 12:11:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.976 12:11:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.976 12:11:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:49.976 12:11:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.976 12:11:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:49.976 Found net devices under 0000:31:00.0: cvl_0_0 00:15:49.976 12:11:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.976 12:11:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.976 12:11:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.976 12:11:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:49.976 12:11:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.976 12:11:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:49.976 Found net devices under 0000:31:00.1: cvl_0_1 00:15:49.976 12:11:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.976 12:11:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:49.976 12:11:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:49.976 12:11:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:49.976 12:11:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:49.976 12:11:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.976 12:11:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.976 12:11:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.976 12:11:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:49.976 12:11:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.976 12:11:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.976 12:11:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:49.976 12:11:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.976 12:11:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.976 12:11:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:49.976 12:11:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:49.976 12:11:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.976 12:11:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.976 12:11:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.976 12:11:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.976 12:11:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:49.976 12:11:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.976 12:11:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.976 
12:11:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.976 12:11:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:15:49.976 00:15:49.976 --- 10.0.0.2 ping statistics --- 00:15:49.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.976 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:15:49.976 12:11:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:15:49.976 00:15:49.976 --- 10.0.0.1 ping statistics --- 00:15:49.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.976 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:15:49.976 12:11:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.976 12:11:50 -- nvmf/common.sh@411 -- # return 0 00:15:49.976 12:11:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:49.976 12:11:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.976 12:11:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:49.976 12:11:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:49.976 12:11:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.976 12:11:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:49.976 12:11:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:49.976 12:11:50 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:49.976 12:11:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:49.976 12:11:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:49.976 12:11:50 -- common/autotest_common.sh@10 -- # set +x 00:15:49.976 12:11:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:49.976 12:11:50 -- nvmf/common.sh@470 -- # nvmfpid=3381247 00:15:49.976 12:11:50 -- nvmf/common.sh@471 -- # waitforlisten 3381247 00:15:49.976 12:11:50 -- common/autotest_common.sh@817 -- # '[' -z 3381247 ']' 00:15:49.976 12:11:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.976 12:11:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:49.976 12:11:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.976 12:11:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:49.976 12:11:50 -- common/autotest_common.sh@10 -- # set +x 00:15:49.976 [2024-04-26 12:11:50.324694] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:49.976 [2024-04-26 12:11:50.324743] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.976 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.976 [2024-04-26 12:11:50.401269] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.976 [2024-04-26 12:11:50.488344] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:49.976 [2024-04-26 12:11:50.488403] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.976 [2024-04-26 12:11:50.488411] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.976 [2024-04-26 12:11:50.488418] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.976 [2024-04-26 12:11:50.488424] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.976 [2024-04-26 12:11:50.488450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.976 12:11:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:49.976 12:11:51 -- common/autotest_common.sh@850 -- # return 0 00:15:49.976 12:11:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:49.976 12:11:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:49.976 12:11:51 -- common/autotest_common.sh@10 -- # set +x 00:15:49.976 12:11:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.976 12:11:51 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:49.976 12:11:51 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:49.976 12:11:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.976 12:11:51 -- common/autotest_common.sh@10 -- # set +x 00:15:49.976 [2024-04-26 12:11:51.163347] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.976 12:11:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.976 12:11:51 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:49.976 12:11:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.976 12:11:51 -- common/autotest_common.sh@10 -- # set +x 00:15:49.976 12:11:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.976 12:11:51 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.976 12:11:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.976 12:11:51 -- common/autotest_common.sh@10 -- # set +x 00:15:49.976 [2024-04-26 12:11:51.179548] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.977 12:11:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.977 12:11:51 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:49.977 12:11:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.977 12:11:51 -- common/autotest_common.sh@10 -- # set +x 00:15:49.977 12:11:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.977 12:11:51 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:49.977 12:11:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.977 12:11:51 -- common/autotest_common.sh@10 -- # set +x 00:15:50.239 malloc0 00:15:50.239 12:11:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.239 12:11:51 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:50.239 12:11:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.239 12:11:51 -- common/autotest_common.sh@10 -- # set +x 00:15:50.239 12:11:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.239 12:11:51 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:50.239 12:11:51 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:50.239 12:11:51 -- nvmf/common.sh@521 -- # config=() 00:15:50.239 12:11:51 -- nvmf/common.sh@521 -- # local subsystem config 00:15:50.239 12:11:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:50.239 12:11:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:50.239 { 00:15:50.239 "params": { 00:15:50.239 "name": "Nvme$subsystem", 00:15:50.239 "trtype": "$TEST_TRANSPORT", 00:15:50.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.239 "adrfam": "ipv4", 00:15:50.239 "trsvcid": "$NVMF_PORT", 00:15:50.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.239 "hdgst": ${hdgst:-false}, 00:15:50.239 "ddgst": ${ddgst:-false} 00:15:50.239 }, 00:15:50.239 "method": "bdev_nvme_attach_controller" 00:15:50.239 } 00:15:50.239 EOF 00:15:50.239 )") 00:15:50.239 12:11:51 -- nvmf/common.sh@543 -- # cat 00:15:50.239 12:11:51 -- nvmf/common.sh@545 -- # jq . 00:15:50.239 12:11:51 -- nvmf/common.sh@546 -- # IFS=, 00:15:50.239 12:11:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:50.239 "params": { 00:15:50.239 "name": "Nvme1", 00:15:50.239 "trtype": "tcp", 00:15:50.239 "traddr": "10.0.0.2", 00:15:50.239 "adrfam": "ipv4", 00:15:50.239 "trsvcid": "4420", 00:15:50.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:50.239 "hdgst": false, 00:15:50.239 "ddgst": false 00:15:50.239 }, 00:15:50.239 "method": "bdev_nvme_attach_controller" 00:15:50.239 }' 00:15:50.239 [2024-04-26 12:11:51.264369] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:50.239 [2024-04-26 12:11:51.264451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3381285 ] 00:15:50.239 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.239 [2024-04-26 12:11:51.332637] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.240 [2024-04-26 12:11:51.407275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.501 Running I/O for 10 seconds... 
00:16:00.616 00:16:00.616 Latency(us) 00:16:00.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.616 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:00.616 Verification LBA range: start 0x0 length 0x1000 00:16:00.616 Nvme1n1 : 10.01 7676.97 59.98 0.00 0.00 16619.72 1911.47 29491.20 00:16:00.616 =================================================================================================================== 00:16:00.616 Total : 7676.97 59.98 0.00 0.00 16619.72 1911.47 29491.20 00:16:00.877 12:12:01 -- target/zcopy.sh@39 -- # perfpid=3383569 00:16:00.877 12:12:01 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:00.877 12:12:01 -- common/autotest_common.sh@10 -- # set +x 00:16:00.877 12:12:01 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:00.877 12:12:01 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:00.877 12:12:01 -- nvmf/common.sh@521 -- # config=() 00:16:00.877 12:12:01 -- nvmf/common.sh@521 -- # local subsystem config 00:16:00.877 12:12:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:00.877 12:12:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:00.877 { 00:16:00.877 "params": { 00:16:00.877 "name": "Nvme$subsystem", 00:16:00.877 "trtype": "$TEST_TRANSPORT", 00:16:00.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:00.877 "adrfam": "ipv4", 00:16:00.877 "trsvcid": "$NVMF_PORT", 00:16:00.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:00.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:00.877 "hdgst": ${hdgst:-false}, 00:16:00.877 "ddgst": ${ddgst:-false} 00:16:00.877 }, 00:16:00.877 "method": "bdev_nvme_attach_controller" 00:16:00.877 } 00:16:00.877 EOF 00:16:00.877 )") 00:16:00.877 [2024-04-26 12:12:01.880874] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.877 [2024-04-26 12:12:01.880902] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.877 12:12:01 -- nvmf/common.sh@543 -- # cat 00:16:00.877 12:12:01 -- nvmf/common.sh@545 -- # jq . 
00:16:00.877 [2024-04-26 12:12:01.888865] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.877 [2024-04-26 12:12:01.888873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.877 12:12:01 -- nvmf/common.sh@546 -- # IFS=, 00:16:00.877 12:12:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:00.877 "params": { 00:16:00.877 "name": "Nvme1", 00:16:00.877 "trtype": "tcp", 00:16:00.877 "traddr": "10.0.0.2", 00:16:00.877 "adrfam": "ipv4", 00:16:00.877 "trsvcid": "4420", 00:16:00.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:00.877 "hdgst": false, 00:16:00.877 "ddgst": false 00:16:00.877 }, 00:16:00.878 "method": "bdev_nvme_attach_controller" 00:16:00.878 }' 00:16:00.878 [2024-04-26 12:12:01.896882] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:01.896889] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:01.904899] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:01.904907] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:01.912920] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:01.912927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:01.920940] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:01.920947] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:01.924750] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:16:00.878 [2024-04-26 12:12:01.924795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383569 ] 00:16:00.878 [2024-04-26 12:12:01.928960] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:01.928968] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:01.936982] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:01.936989] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:01.945001] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:01.945008] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.878 [2024-04-26 12:12:01.953021] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:01.953028] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:01.961041] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:01.961048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:01.969061] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:01.969068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:01.977083] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:01.977090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:01.983164] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.878 [2024-04-26 12:12:01.985105] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:01.985111] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:01.993124] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:01.993132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:02.001144] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:02.001151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:02.009164] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:02.009171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:02.017186] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:02.017194] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:02.025208] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:02.025219] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:02.033228] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:02.033236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:02.041250] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:02.041257] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:02.049271] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:02.049279] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:02.049818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.878 [2024-04-26 12:12:02.057292] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:02.057299] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:02.065321] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:02.065333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:02.073341] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:02.073350] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:02.081359] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:02.081371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.878 [2024-04-26 12:12:02.089376] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.878 [2024-04-26 12:12:02.089383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.097397] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.097405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.109430] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.109437] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.117450] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.117458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.125475] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.125485] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.133496] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.133505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.141517] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.141525] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.149537] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.149546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.157555] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.157564] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.165576] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.165582] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.173598] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.173604] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.181618] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.181624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.189637] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.189643] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.197657] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.197664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.205680] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.205688] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.213702] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.213708] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.221722] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.221728] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.229744] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.229751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.237765] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.237774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.245787] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.245795] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.253807] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.253813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.261827] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:16:01.138 [2024-04-26 12:12:02.261833] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.269852] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.269859] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.277872] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.277878] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.285889] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.285895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.293912] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.293919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.301986] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.301999] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 Running I/O for 5 seconds... 00:16:01.138 [2024-04-26 12:12:02.310001] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.310009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.318023] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.318032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.328735] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.328750] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.336818] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.336833] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.345588] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.345603] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.138 [2024-04-26 12:12:02.354590] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.138 [2024-04-26 12:12:02.354605] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.397 [2024-04-26 12:12:02.363722] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.397 [2024-04-26 12:12:02.363737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.397 [2024-04-26 12:12:02.372310] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.397 [2024-04-26 12:12:02.372325] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.397 [2024-04-26 12:12:02.381229] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.397 [2024-04-26 12:12:02.381243] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:16:01.397 [2024-04-26 12:12:02.389698] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.397 [2024-04-26 12:12:02.389712] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.397 [2024-04-26 12:12:02.398827] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.397 [2024-04-26 12:12:02.398849] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.397 [2024-04-26 12:12:02.407638] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.397 [2024-04-26 12:12:02.407652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.397 [2024-04-26 12:12:02.416307] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.397 [2024-04-26 12:12:02.416321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.397 [2024-04-26 12:12:02.424890] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.397 [2024-04-26 12:12:02.424904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.397 [2024-04-26 12:12:02.433520] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.397 [2024-04-26 12:12:02.433534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.442341] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.442355] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.451338] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.451352] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.460193] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.460207] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.469399] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.469414] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.478016] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.478030] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.486788] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.486802] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.495885] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.495900] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.505167] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.505181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.513827] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:01.398 [2024-04-26 12:12:02.513846] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.522526] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.522539] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.531730] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.531744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.540350] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.540364] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.548998] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.549013] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.558444] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.558459] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.566899] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.566913] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.576144] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.576159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.584662] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.584676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.593412] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.593426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.602445] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.602459] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.398 [2024-04-26 12:12:02.611073] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.398 [2024-04-26 12:12:02.611087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.657 [2024-04-26 12:12:02.620045] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.620059] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.628603] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.628618] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.637525] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.637540] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.646385] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.646399] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.654624] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.654639] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.663695] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.663709] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.672785] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.672799] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.681869] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.681882] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.691072] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.691087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.700029] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.700044] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.709088] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.709102] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.717653] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.717667] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.726212] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.726226] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.735293] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.735308] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.743885] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.743899] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.752853] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.752867] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.760795] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.760808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.769628] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.769642] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.778003] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.778017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.786532] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.786546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.795441] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.795455] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.804037] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.804051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.813083] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.813097] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.821627] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.821641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.830084] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.830098] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.838562] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.838576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.847126] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.847140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.855940] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.855953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.865161] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.865174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-04-26 12:12:02.873822] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-04-26 12:12:02.873841] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-04-26 12:12:02.882562] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-04-26 12:12:02.882576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-04-26 12:12:02.891147] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-04-26 12:12:02.891161] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-04-26 12:12:02.900056] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-04-26 12:12:02.900070] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-04-26 12:12:02.909120] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-04-26 12:12:02.909135] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-04-26 12:12:02.918293] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-04-26 12:12:02.918308] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-04-26 12:12:02.927324] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-04-26 12:12:02.927338] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-04-26 12:12:02.935906] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-04-26 12:12:02.935921] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-04-26 12:12:02.944820] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-04-26 12:12:02.944834] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:02.953048] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:02.953062] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:02.961721] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:02.961735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:02.970760] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:02.970774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:02.979387] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:02.979401] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:02.987989] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:02.988003] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:02.996593] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:02.996607] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.005357] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.005371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.014578] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.014592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.023835] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.023852] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.032859] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.032873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.041287] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.041301] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.050606] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.050621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.058819] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.058833] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.066848] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.066863] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.075694] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.075708] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.084231] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.084245] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.093182] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.093196] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.102353] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.102367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.111329] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.111344] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.119895] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.119909] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-04-26 12:12:03.129216] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-04-26 12:12:03.129230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.138414] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.138429] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.147271] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.147285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.156255] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.156269] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.165249] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.165263] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.174071] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.174085] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.183238] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.183252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.191980] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.191995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.200901] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.200915] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.210094] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.210108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.219205] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.219219] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.227125] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.227143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.235928] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.235942] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.244577] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.244591] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.253524] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.253538] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.262356] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.262370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.271222] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.271236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.280154] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.280168] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.288749] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.288763] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.297560] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.297574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.306572] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.306587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.315655] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.315669] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.324406] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.324419] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.333554] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.333569] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.342780] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.342794] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.351409] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.351423] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.182 [2024-04-26 12:12:03.360684] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.182 [2024-04-26 12:12:03.360698] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.183 [2024-04-26 12:12:03.369241] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.183 [2024-04-26 12:12:03.369255] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.183 [2024-04-26 12:12:03.377962] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.183 [2024-04-26 12:12:03.377977] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.183 [2024-04-26 12:12:03.387241] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.183 [2024-04-26 12:12:03.387255] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.183 [2024-04-26 12:12:03.395861] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.183 [2024-04-26 12:12:03.395879] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.404680] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.404695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.413955] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.413969] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.422559] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.422573] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.431456] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.431470] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.440592] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.440606] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.449340] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.449354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.458247] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.458261] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.467005] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.467019] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.475748] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.475762] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.484576] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.484590] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.493236] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.493250] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.502404] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.502418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.511036] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.511050] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.519201] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.519215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.528601] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.528615] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.536487] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.536500] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.545496] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.545510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.554285] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.554299] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.563100] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.563117] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.572258] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.572272] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.581326] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.581341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.590511] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.590526] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.599360] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.599373] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.608265] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.608279] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.616942] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.616956] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.625905] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.625919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.634855] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.634868] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.643433] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.643446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.652264] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.652279] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.443 [2024-04-26 12:12:03.661177] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.443 [2024-04-26 12:12:03.661191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.703 [2024-04-26 12:12:03.669742] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.669757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.677725] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.677739] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.686797] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.686811] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.695929] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.695943] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.704480] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.704494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.713452] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.713466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.722155] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.722169] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.730173] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.730192] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.739399] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.739413] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.747860] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.747874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.756392] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.756406] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.765431] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.765445] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.774376] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.774391] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.782955] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.782969] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.791625] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.791639] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.800577] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.800591] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.809699] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.809713] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.818434] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.818448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.827260] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.827274] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.835883] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.835898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.844642] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.844656] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.853334] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.853349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.862181] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.862195] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.870689] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.870703] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.879684] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.879697] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.888489] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.888502] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.897300] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.897315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.906201] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.906216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.914983] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.914998] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.704 [2024-04-26 12:12:03.923427] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.704 [2024-04-26 12:12:03.923441] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.965 [2024-04-26 12:12:03.932612] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.965 [2024-04-26 12:12:03.932627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.965 [2024-04-26 12:12:03.940694] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.965 [2024-04-26 12:12:03.940708] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.965 [2024-04-26 12:12:03.949484] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.965 [2024-04-26 12:12:03.949498] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.965 [2024-04-26 12:12:03.958315] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.965 [2024-04-26 12:12:03.958329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.965 [2024-04-26 12:12:03.966885] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.965 [2024-04-26 12:12:03.966899] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:03.975642] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:03.975656] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:03.984449] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:03.984463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:03.993058] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:03.993072] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.002112] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.002126] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.011091] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.011105] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.020025] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.020039] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.028698] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.028712] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.037767] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.037781] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.046341] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.046355] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.055343] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.055357] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.063463] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.063477] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.072186] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.072200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.081181] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.081196] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.090520] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.090533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.099581] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.099596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.108201] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.108215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.116950] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.116964] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.125507] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.125520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.134055] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.134069] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.142473] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.142487] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.151362] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.151376] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.160337] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.160352] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.169221] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.169235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.966 [2024-04-26 12:12:04.178085] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.966 [2024-04-26 12:12:04.178099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.226 [2024-04-26 12:12:04.186779] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.226 [2024-04-26 12:12:04.186793] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.226 [2024-04-26 12:12:04.195442] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.226 [2024-04-26 12:12:04.195455] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.226 [2024-04-26 12:12:04.204338] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.226 [2024-04-26 12:12:04.204352] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.226 [2024-04-26 12:12:04.213770] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.226 [2024-04-26 12:12:04.213784] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.226 [2024-04-26 12:12:04.222323] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.226 [2024-04-26 12:12:04.222337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.226 [2024-04-26 12:12:04.231651] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.226 [2024-04-26 12:12:04.231665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.226 [2024-04-26 12:12:04.239862] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.226 [2024-04-26 12:12:04.239875] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.226 [2024-04-26 12:12:04.249067] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.226 [2024-04-26 12:12:04.249081] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.226 [2024-04-26 12:12:04.257782] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.226 [2024-04-26 12:12:04.257796] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.226 [2024-04-26 12:12:04.266627] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.226 [2024-04-26 12:12:04.266641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.226 [2024-04-26 12:12:04.275373] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.226 [2024-04-26 12:12:04.275387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.226 [2024-04-26 12:12:04.284512] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.284526] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.293067] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.293081] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.302105] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.302119] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.311004] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.311017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.320238] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.320251] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.328219] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.328233] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.336717] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.336730] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.345567] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.345581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.354288] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.354302] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.363254] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.363268] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.372342] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.372356] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.380885] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.380898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.389636] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.389650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.398090] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.398104] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.406737] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.406751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.415723] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.415737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.424824] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.424842] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.434012] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.434026] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.227 [2024-04-26 12:12:04.442797] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.227 [2024-04-26 12:12:04.442811] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.451506] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.451520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.459514] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.459528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.468448] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.468462] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.477385] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.477398] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.486461] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.486475] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.495468] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.495482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.504328] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.504342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.513066] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.513080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.521971] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.521985] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.530428] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.530442] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.539540] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.539554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.548646] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.548660] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.557578] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.557595] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.566573] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.566587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.575218] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.575232] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.583997] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.584011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.593153] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.593168] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.602434] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.602448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.610933] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.610948] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.619590] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.619603] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.628202] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.628216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.637638] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.637653] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.646284] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.646297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.654856] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.654871] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.664063] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.664078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.672620] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.672634] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.681572] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.681586] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.690173] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.690187] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.487 [2024-04-26 12:12:04.699369] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.487 [2024-04-26 12:12:04.699383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.708077] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.708092] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.716953] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.716967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.725565] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.725582] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.734120] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.734133] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.742915] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.742937] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.751221] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.751235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.760414] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.760428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.769490] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.769504] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.778281] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.778295] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.786915] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.786929] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.796199] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.796212] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.804662] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.804676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.813464] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.813477] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.822132] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.822146] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.830743] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.830757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.839691] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.839705] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.848520] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.848534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.857398] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.857412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.865963] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.865978] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.874884] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.874898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.883515] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.883529] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.892505] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.892523] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.901823] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.901842] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.910436] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.910450] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.919514] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.919528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.928653] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.928667] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.937760] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.937775] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.946547] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.946562] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.955196] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.955210] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.749 [2024-04-26 12:12:04.963820] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.749 [2024-04-26 12:12:04.963834] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.010 [2024-04-26 12:12:04.972506] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.010 [2024-04-26 12:12:04.972521] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.010 [2024-04-26 12:12:04.981407] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.010 [2024-04-26 12:12:04.981421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.010 [2024-04-26 12:12:04.990618] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.010 [2024-04-26 12:12:04.990632] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.010 [2024-04-26 12:12:04.999125] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.010 [2024-04-26 12:12:04.999140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.010 [2024-04-26 12:12:05.008247] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.010 [2024-04-26 12:12:05.008261] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.010 [2024-04-26 12:12:05.016653] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.010 [2024-04-26 12:12:05.016667] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.010 [2024-04-26 12:12:05.025484] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.010 [2024-04-26 12:12:05.025498] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.010 [2024-04-26 12:12:05.034921] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.010 [2024-04-26 12:12:05.034935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.010 [2024-04-26 12:12:05.042947] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.042961] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.051807] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.051821] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.060385] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.060402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.069317] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.069331] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.078118] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.078132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.086670] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.086684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.095278] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.095292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.104560] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.104574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.113753] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.113768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.122508] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.122522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.131193] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.131207] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.140391] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.140405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.149597] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.149610] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.158547] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.158561] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.167229] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.167243] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.175988] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.176002] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.185146] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.185160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.194073] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.194087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.203010] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.203024] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.211480] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.211494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.220020] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.220034] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.011 [2024-04-26 12:12:05.228813] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.011 [2024-04-26 12:12:05.228827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.274 [2024-04-26 12:12:05.237566] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.274 [2024-04-26 12:12:05.237581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.274 [2024-04-26 12:12:05.246167] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.274 [2024-04-26 12:12:05.246181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.274 [2024-04-26 12:12:05.255119] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.274 [2024-04-26 12:12:05.255133] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.274 [2024-04-26 12:12:05.263734] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.274 [2024-04-26 12:12:05.263748] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.274 [2024-04-26 12:12:05.272786] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.274 [2024-04-26 12:12:05.272799] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.274 [2024-04-26 12:12:05.281273] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.281287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.290240] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.290254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.298747] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.298761] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.307305] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.307319] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.316203] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.316217] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.325140] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.325154] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.333671] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.333685] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.342505] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.342519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.351045] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.351059] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.360130] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.360144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.368818] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.368833] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.377229] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.377243] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.386362] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.386377] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.395315] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.395329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.404409] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.404423] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.413446] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.413460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.421827] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.421846] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.430769] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.430783] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.439994] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.440008] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.448534] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.448548] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.457567] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.457581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.466561] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.466575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.475722] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.475737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.275 [2024-04-26 12:12:05.483955] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.275 [2024-04-26 12:12:05.483969] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.537 [2024-04-26 12:12:05.492672] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.537 [2024-04-26 12:12:05.492686] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.537 [2024-04-26 12:12:05.501417] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.537 [2024-04-26 12:12:05.501431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.537 [2024-04-26 12:12:05.510240] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.537 [2024-04-26 12:12:05.510254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.537 [2024-04-26 12:12:05.518535] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.537 [2024-04-26 12:12:05.518548] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.537 [2024-04-26 12:12:05.527291] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.537 [2024-04-26 12:12:05.527305] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.537 [2024-04-26 12:12:05.536148] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.537 [2024-04-26 12:12:05.536162] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.537 [2024-04-26 12:12:05.545266] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.537 [2024-04-26 12:12:05.545280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.537 [2024-04-26 12:12:05.553803] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.553817] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.562694] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.562708] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.571561] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.571576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.579919] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.579933] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.588897] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.588911] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.597608] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.597622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.606499] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.606513] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.615577] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.615590] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.624102] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.624116] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.633385] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.633399] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.642054] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.642068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.651156] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.651170] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.659973] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.659987] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.668833] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.668852] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.677304] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.677317] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.686239] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.686254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.695288] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.695302] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.704153] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.704167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.713351] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.713365] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.722253] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.722267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.731031] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.731046] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.739567] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.739581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.538 [2024-04-26 12:12:05.748059] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.538 [2024-04-26 12:12:05.748073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.798 [2024-04-26 12:12:05.757290] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.798 [2024-04-26 12:12:05.757304] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.798 [2024-04-26 12:12:05.766294] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.798 [2024-04-26 12:12:05.766309] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.798 [2024-04-26 12:12:05.774916] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.798 [2024-04-26 12:12:05.774930] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.798 [2024-04-26 12:12:05.783732] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.798 [2024-04-26 12:12:05.783746] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.798 [2024-04-26 12:12:05.792574] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.798 [2024-04-26 12:12:05.792588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.798 [2024-04-26 12:12:05.801593] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.798 [2024-04-26 12:12:05.801606] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.798 [2024-04-26 12:12:05.810131] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.798 [2024-04-26 12:12:05.810145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.798 [2024-04-26 12:12:05.819567] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.819581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.827727] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.827742] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.841204] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.841220] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.849356] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.849371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.858415] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.858429] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.867073] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.867087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.875959] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.875973] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.885328] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.885342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.893928] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.893945] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.902689] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.902703] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.912083] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.912098] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.920877] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.920892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.929494] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.929508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.938313] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.938326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.947143] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.947156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.956314] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.956328] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.964960] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.964973] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.973313] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.973326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.982307] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.982322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.991124] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.991138] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:05.999901] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:05.999915] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:06.008471] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:06.008485] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 [2024-04-26 12:12:06.017107] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-04-26 12:12:06.017121] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.026439] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.026453] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.034566] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.034580] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.043621] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.043635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.052475] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.052489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.061327] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.061344] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.070003] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.070017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.078792] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.078806] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.087832] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.087850] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.096855] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.096869] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.105381] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.105395] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.114035] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.114049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.122015] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.122029] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.131140] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.131154] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.139673] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.139687] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.148446] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.148460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.156864] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.156878] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.165746] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.165759] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.174476] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.174490] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.183119] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.183133] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.191429] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.191443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.200264] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.200278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.209057] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.209072] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.218051] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.218065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.226663] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.226680] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.235195] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.235209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.243550] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.243564] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.252663] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.252677] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.261664] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.261678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.270615] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.270628] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.060 [2024-04-26 12:12:06.279503] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.060 [2024-04-26 12:12:06.279517] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.288918] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.288933] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.297688] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.297702] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.307073] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.307087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.316266] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.316280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.325101] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.325115] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.334109] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.334124] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.342615] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.342628] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.351492] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.351506] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.360491] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.360505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.369551] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.369565] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.378223] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.378237] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.387146] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.387160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.396430] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.396449] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.405479] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.405493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.413727] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.413741] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.422728] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.422741] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.431054] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.431068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.439780] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.439794] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.448876] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.448890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.457647] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.457661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.466073] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.466087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.474995] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.475009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.484186] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.484201] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.492650] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.492664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.501615] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.501630] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.510307] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.510321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.518966] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.518980] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.527459] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.527473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.321 [2024-04-26 12:12:06.536353] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.321 [2024-04-26 12:12:06.536367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.545377] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.545391] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.554405] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.554418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.562948] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.562962] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.571305] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.571319] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.580633] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.580647] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.589356] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.589370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.598458] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.598474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.607525] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.607539] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.616663] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.616678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.625178] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.625192] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.634425] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.634440] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.643256] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.643271] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.652555] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.652569] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.660649] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.660663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.669524] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.669537] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.678146] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.678159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.687012] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.687027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.695737] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.695751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.704566] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.704580] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.713267] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.713281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.722371] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.722385] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.731263] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.731277] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.739943] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.739958] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.747954] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.582 [2024-04-26 12:12:06.747967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.582 [2024-04-26 12:12:06.756928] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.583 [2024-04-26 12:12:06.756942] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.583 [2024-04-26 12:12:06.765981] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.583 [2024-04-26 12:12:06.765996] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.583 [2024-04-26 12:12:06.774873] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.583 [2024-04-26 12:12:06.774887] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.583 [2024-04-26 12:12:06.783610] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.583 [2024-04-26 12:12:06.783624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.583 [2024-04-26 12:12:06.791917] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.583 [2024-04-26 12:12:06.791931] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.583 [2024-04-26 12:12:06.800776] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.583 [2024-04-26 12:12:06.800790] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.809423] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.809437] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.818747] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.818761] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.827759] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.827773] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.836096] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.836110] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.845200] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.845214] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.853852] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.853866] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.862424] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.862438] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.871267] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.871281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.880172] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.880186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.889004] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.889018] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.897549] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.897563] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.906632] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.906646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.915145] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.915160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.924104] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.924119] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.933188] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.933204] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.942193] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.942208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.951594] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.951609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.960559] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.960573] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.968988] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.969002] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.978098] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.978113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.986684] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.986698] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:06.994976] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:06.994990] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:07.003536] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:07.003550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:07.012035] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:07.012049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:07.020585] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:07.020599] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:07.029347] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:07.029361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:07.038325] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:07.038339] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:07.047253] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:07.047268] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.844 [2024-04-26 12:12:07.056150] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.844 [2024-04-26 12:12:07.056164] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.065147] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.065161] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.073860] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.073874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.083008] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.083022] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.091635] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.091649] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.100190] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.100204] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.109124] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.109138] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.117221] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.117235] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.126342] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.126356] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.134993] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.135007] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.144141] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.144155] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.153233] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.153247] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.161865] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.161879] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.170694] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.170708] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.179602] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.179615] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.188690] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.188703] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.197328] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.197342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.206199] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.206213] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.105 [2024-04-26 12:12:07.215381] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.105 [2024-04-26 12:12:07.215395] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.106 [2024-04-26 12:12:07.223948] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.106 [2024-04-26 12:12:07.223965] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.106 [2024-04-26 12:12:07.232699] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.106 [2024-04-26 12:12:07.232713] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.106 [2024-04-26 12:12:07.241578] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.106 [2024-04-26 12:12:07.241592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.106 [2024-04-26 12:12:07.250345] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.106 [2024-04-26 12:12:07.250359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.106 [2024-04-26 12:12:07.259307] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.106 [2024-04-26 12:12:07.259321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.106 [2024-04-26 12:12:07.267794] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.106 [2024-04-26 12:12:07.267808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.106 [2024-04-26 12:12:07.276784] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.106 [2024-04-26 12:12:07.276798] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.106 [2024-04-26 12:12:07.286216] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.106 [2024-04-26 12:12:07.286230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.106 [2024-04-26 12:12:07.294149] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.106 [2024-04-26 12:12:07.294163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.106 [2024-04-26 12:12:07.303188] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.106 [2024-04-26 12:12:07.303201] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.106 [2024-04-26 12:12:07.312004] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.106 [2024-04-26 12:12:07.312019] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.106 [2024-04-26 12:12:07.320695] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.106 [2024-04-26 12:12:07.320709] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 00:16:06.367 Latency(us) 00:16:06.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.367 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:06.367 Nvme1n1 : 5.00 19005.86 148.48 0.00 0.00 6729.05 2430.29 17476.27 00:16:06.367 =================================================================================================================== 00:16:06.367 Total : 19005.86 148.48 0.00 0.00 6729.05 2430.29 17476.27 00:16:06.367 [2024-04-26 12:12:07.329576] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.329590] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.334958] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.334969] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.342985] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.342995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.351003] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.351012] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.359020] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.359035] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.367039] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.367049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.375057] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.375066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.383074] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.383082] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.391094] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.391102] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.399115] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.399123] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.407137] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.407145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.415159] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.415167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.423180] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.423190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.431200] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.431208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.439221] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.439230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.447241] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.447248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 [2024-04-26 12:12:07.455261] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.367 [2024-04-26 12:12:07.455268] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3383569) - No such process 00:16:06.367 12:12:07 -- target/zcopy.sh@49 -- # wait 3383569 00:16:06.367 12:12:07 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.367 
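The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs above come from the zcopy test re-adding NSID 1 to nqn.2016-06.io.spdk:cnode1 in a tight loop while the background I/O job summarized above is still running; every attempt is rejected because the namespace already exists. A minimal sketch of that RPC exchange against an already-running target (the bdev name Malloc0 is illustrative, not taken from this run):

    # expected to be rejected: NSID 1 is already attached to the subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
    # the cleanup issued by target/zcopy.sh@52 above once the I/O job has exited
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1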
12:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.367 12:12:07 -- common/autotest_common.sh@10 -- # set +x 00:16:06.367 12:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.367 12:12:07 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:06.367 12:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.367 12:12:07 -- common/autotest_common.sh@10 -- # set +x 00:16:06.367 delay0 00:16:06.367 12:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.367 12:12:07 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:06.367 12:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.367 12:12:07 -- common/autotest_common.sh@10 -- # set +x 00:16:06.367 12:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.367 12:12:07 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:06.367 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.628 [2024-04-26 12:12:07.590247] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:14.767 Initializing NVMe Controllers 00:16:14.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:14.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:14.767 Initialization complete. Launching workers. 00:16:14.767 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 233, failed: 30680 00:16:14.767 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 30792, failed to submit 121 00:16:14.767 success 30711, unsuccess 81, failed 0 00:16:14.767 12:12:14 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:14.767 12:12:14 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:14.767 12:12:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:14.767 12:12:14 -- nvmf/common.sh@117 -- # sync 00:16:14.767 12:12:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.767 12:12:14 -- nvmf/common.sh@120 -- # set +e 00:16:14.767 12:12:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.767 12:12:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.767 rmmod nvme_tcp 00:16:14.767 rmmod nvme_fabrics 00:16:14.767 rmmod nvme_keyring 00:16:14.767 12:12:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.767 12:12:14 -- nvmf/common.sh@124 -- # set -e 00:16:14.767 12:12:14 -- nvmf/common.sh@125 -- # return 0 00:16:14.767 12:12:14 -- nvmf/common.sh@478 -- # '[' -n 3381247 ']' 00:16:14.767 12:12:14 -- nvmf/common.sh@479 -- # killprocess 3381247 00:16:14.767 12:12:14 -- common/autotest_common.sh@936 -- # '[' -z 3381247 ']' 00:16:14.767 12:12:14 -- common/autotest_common.sh@940 -- # kill -0 3381247 00:16:14.767 12:12:14 -- common/autotest_common.sh@941 -- # uname 00:16:14.767 12:12:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:14.767 12:12:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3381247 00:16:14.767 12:12:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:14.767 12:12:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:14.767 12:12:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3381247' 
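The abort pass above depends on the delay bdev created by bdev_delay_create: it wraps malloc0 with roughly one second of injected latency per I/O (the four latency arguments are in microseconds), so the abort example always has queued commands left to cancel. A condensed sketch of that sequence, using the same options as the trace above:

    # wrap malloc0 in a delay bdev that injects ~1 s of latency per I/O (usec values)
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive randrw I/O at queue depth 64 for 5 s and abort it, as in the run above
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'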
00:16:14.767 killing process with pid 3381247 00:16:14.767 12:12:14 -- common/autotest_common.sh@955 -- # kill 3381247 00:16:14.767 12:12:14 -- common/autotest_common.sh@960 -- # wait 3381247 00:16:14.767 12:12:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:14.767 12:12:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:14.767 12:12:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:14.767 12:12:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.767 12:12:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.767 12:12:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.767 12:12:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.767 12:12:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.153 12:12:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:16.153 00:16:16.153 real 0m34.330s 00:16:16.153 user 0m46.069s 00:16:16.153 sys 0m10.819s 00:16:16.153 12:12:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:16.153 12:12:17 -- common/autotest_common.sh@10 -- # set +x 00:16:16.153 ************************************ 00:16:16.153 END TEST nvmf_zcopy 00:16:16.153 ************************************ 00:16:16.153 12:12:17 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:16.153 12:12:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:16.153 12:12:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:16.153 12:12:17 -- common/autotest_common.sh@10 -- # set +x 00:16:16.153 ************************************ 00:16:16.153 START TEST nvmf_nmic 00:16:16.153 ************************************ 00:16:16.153 12:12:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:16.484 * Looking for test storage... 
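The nvmf_nmic stage starting here is launched through the same run_test wrapper; to reproduce it outside the pipeline, the equivalent (assuming an SPDK checkout with the same E810/tcp setup already prepared) would be:

    # run only this stage from the repository root, as root
    sudo ./test/nvmf/target/nmic.sh --transport=tcp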
00:16:16.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:16.484 12:12:17 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.484 12:12:17 -- nvmf/common.sh@7 -- # uname -s 00:16:16.484 12:12:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.484 12:12:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.484 12:12:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.484 12:12:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.484 12:12:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.484 12:12:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.484 12:12:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.484 12:12:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.484 12:12:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.484 12:12:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.484 12:12:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:16.484 12:12:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:16.484 12:12:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.484 12:12:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.484 12:12:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.484 12:12:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.484 12:12:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:16.484 12:12:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.484 12:12:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.484 12:12:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.484 12:12:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.484 12:12:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.484 12:12:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.484 12:12:17 -- paths/export.sh@5 -- # export PATH 00:16:16.484 12:12:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.484 12:12:17 -- nvmf/common.sh@47 -- # : 0 00:16:16.484 12:12:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:16.484 12:12:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:16.484 12:12:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.484 12:12:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.484 12:12:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.484 12:12:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:16.484 12:12:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:16.484 12:12:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:16.484 12:12:17 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:16.484 12:12:17 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:16.484 12:12:17 -- target/nmic.sh@14 -- # nvmftestinit 00:16:16.484 12:12:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:16.484 12:12:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.484 12:12:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:16.484 12:12:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:16.484 12:12:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:16.484 12:12:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.484 12:12:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.484 12:12:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.484 12:12:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:16.484 12:12:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:16.484 12:12:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:16.484 12:12:17 -- common/autotest_common.sh@10 -- # set +x 00:16:24.624 12:12:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:24.624 12:12:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:24.624 12:12:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:24.624 12:12:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:24.624 12:12:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:24.624 12:12:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:24.624 12:12:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:24.624 12:12:24 -- nvmf/common.sh@295 -- # net_devs=() 00:16:24.624 12:12:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:24.624 12:12:24 -- nvmf/common.sh@296 -- # 
e810=() 00:16:24.624 12:12:24 -- nvmf/common.sh@296 -- # local -ga e810 00:16:24.624 12:12:24 -- nvmf/common.sh@297 -- # x722=() 00:16:24.624 12:12:24 -- nvmf/common.sh@297 -- # local -ga x722 00:16:24.624 12:12:24 -- nvmf/common.sh@298 -- # mlx=() 00:16:24.624 12:12:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:24.624 12:12:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:24.624 12:12:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:24.624 12:12:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:24.624 12:12:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:24.624 12:12:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:24.624 12:12:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:24.624 12:12:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:24.624 12:12:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:24.624 12:12:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:24.624 12:12:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:24.624 12:12:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:24.624 12:12:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:24.624 12:12:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:24.624 12:12:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:24.624 12:12:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:24.624 12:12:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:24.624 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:24.624 12:12:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:24.624 12:12:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:24.624 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:24.624 12:12:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:24.624 12:12:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:24.624 12:12:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:24.624 12:12:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.624 12:12:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:24.624 12:12:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.624 12:12:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:24.624 Found net 
devices under 0000:31:00.0: cvl_0_0 00:16:24.624 12:12:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.624 12:12:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:24.624 12:12:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.624 12:12:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:24.624 12:12:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.624 12:12:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:24.625 Found net devices under 0000:31:00.1: cvl_0_1 00:16:24.625 12:12:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.625 12:12:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:24.625 12:12:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:24.625 12:12:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:24.625 12:12:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:24.625 12:12:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:24.625 12:12:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.625 12:12:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.625 12:12:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:24.625 12:12:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:24.625 12:12:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:24.625 12:12:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:24.625 12:12:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:24.625 12:12:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:24.625 12:12:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.625 12:12:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:24.625 12:12:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:24.625 12:12:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:24.625 12:12:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:24.625 12:12:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:24.625 12:12:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:24.625 12:12:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:24.625 12:12:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:24.625 12:12:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:24.625 12:12:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:24.625 12:12:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:24.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.520 ms 00:16:24.625 00:16:24.625 --- 10.0.0.2 ping statistics --- 00:16:24.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.625 rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms 00:16:24.625 12:12:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:24.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:24.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:16:24.625 00:16:24.625 --- 10.0.0.1 ping statistics --- 00:16:24.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.625 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:16:24.625 12:12:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.625 12:12:24 -- nvmf/common.sh@411 -- # return 0 00:16:24.625 12:12:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:24.625 12:12:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.625 12:12:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:24.625 12:12:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:24.625 12:12:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.625 12:12:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:24.625 12:12:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:24.625 12:12:24 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:24.625 12:12:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:24.625 12:12:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:24.625 12:12:24 -- common/autotest_common.sh@10 -- # set +x 00:16:24.625 12:12:24 -- nvmf/common.sh@470 -- # nvmfpid=3390918 00:16:24.625 12:12:24 -- nvmf/common.sh@471 -- # waitforlisten 3390918 00:16:24.625 12:12:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:24.625 12:12:24 -- common/autotest_common.sh@817 -- # '[' -z 3390918 ']' 00:16:24.625 12:12:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.625 12:12:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:24.625 12:12:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.625 12:12:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:24.625 12:12:24 -- common/autotest_common.sh@10 -- # set +x 00:16:24.625 [2024-04-26 12:12:24.802381] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:24.625 [2024-04-26 12:12:24.802449] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.625 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.625 [2024-04-26 12:12:24.876298] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:24.625 [2024-04-26 12:12:24.950892] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.625 [2024-04-26 12:12:24.950938] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.625 [2024-04-26 12:12:24.950947] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.625 [2024-04-26 12:12:24.950955] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.625 [2024-04-26 12:12:24.950961] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
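By this point the harness has split the two ports across a network namespace: cvl_0_0 (10.0.0.2) was moved into cvl_0_0_ns_spdk as the target side, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator side, which is why nvmf_tgt is started through ip netns exec. A condensed sketch of that launch (core mask and trace mask as in the trace above):

    # start the target inside the namespace that owns the 10.0.0.2 port
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # optional: snapshot the tracepoints enabled by -e 0xFFFF, per the notice above
    spdk_trace -s nvmf -i 0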
00:16:24.625 [2024-04-26 12:12:24.951492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.625 [2024-04-26 12:12:24.951576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.625 [2024-04-26 12:12:24.951740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:24.625 [2024-04-26 12:12:24.951794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.625 12:12:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:24.625 12:12:25 -- common/autotest_common.sh@850 -- # return 0 00:16:24.625 12:12:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:24.625 12:12:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:24.625 12:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:24.625 12:12:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.625 12:12:25 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:24.625 12:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.625 12:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:24.625 [2024-04-26 12:12:25.620383] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.625 12:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.625 12:12:25 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:24.625 12:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.625 12:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:24.625 Malloc0 00:16:24.625 12:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.625 12:12:25 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:24.625 12:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.625 12:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:24.625 12:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.625 12:12:25 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:24.625 12:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.625 12:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:24.625 12:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.625 12:12:25 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.625 12:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.625 12:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:24.625 [2024-04-26 12:12:25.679794] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.625 12:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.625 12:12:25 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:24.625 test case1: single bdev can't be used in multiple subsystems 00:16:24.625 12:12:25 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:24.625 12:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.625 12:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:24.625 12:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.625 12:12:25 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:24.625 12:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 
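Test case1, which begins in the trace above, stacks two subsystems on one 64 MB malloc bdev and expects the second nvmf_subsystem_add_ns to be refused, because a bdev claimed exclusive_write by one subsystem cannot be opened by another (the JSON-RPC error below). The RPC sequence, condensed from the rpc_cmd traces around this point:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    # expected to fail: Malloc0 is already claimed exclusive_write by cnode1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0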
00:16:24.625 12:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:24.625 12:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.625 12:12:25 -- target/nmic.sh@28 -- # nmic_status=0 00:16:24.625 12:12:25 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:24.625 12:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.625 12:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:24.625 [2024-04-26 12:12:25.715762] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:24.625 [2024-04-26 12:12:25.715781] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:24.625 [2024-04-26 12:12:25.715789] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.625 request: 00:16:24.625 { 00:16:24.625 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:24.625 "namespace": { 00:16:24.625 "bdev_name": "Malloc0", 00:16:24.625 "no_auto_visible": false 00:16:24.625 }, 00:16:24.625 "method": "nvmf_subsystem_add_ns", 00:16:24.625 "req_id": 1 00:16:24.625 } 00:16:24.625 Got JSON-RPC error response 00:16:24.625 response: 00:16:24.625 { 00:16:24.625 "code": -32602, 00:16:24.625 "message": "Invalid parameters" 00:16:24.625 } 00:16:24.625 12:12:25 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:24.625 12:12:25 -- target/nmic.sh@29 -- # nmic_status=1 00:16:24.625 12:12:25 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:24.625 12:12:25 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:24.625 Adding namespace failed - expected result. 00:16:24.625 12:12:25 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:24.625 test case2: host connect to nvmf target in multiple paths 00:16:24.625 12:12:25 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:24.625 12:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.625 12:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:24.626 [2024-04-26 12:12:25.727896] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:24.626 12:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.626 12:12:25 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:26.538 12:12:27 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:27.922 12:12:28 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:27.922 12:12:28 -- common/autotest_common.sh@1184 -- # local i=0 00:16:27.922 12:12:28 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:27.922 12:12:28 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:27.922 12:12:28 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:29.833 12:12:30 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:29.833 12:12:30 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:29.833 12:12:30 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:29.833 12:12:30 -- 
common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:29.833 12:12:30 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:29.833 12:12:30 -- common/autotest_common.sh@1194 -- # return 0 00:16:29.833 12:12:30 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:29.833 [global] 00:16:29.833 thread=1 00:16:29.833 invalidate=1 00:16:29.833 rw=write 00:16:29.833 time_based=1 00:16:29.833 runtime=1 00:16:29.833 ioengine=libaio 00:16:29.833 direct=1 00:16:29.833 bs=4096 00:16:29.833 iodepth=1 00:16:29.833 norandommap=0 00:16:29.833 numjobs=1 00:16:29.833 00:16:29.833 verify_dump=1 00:16:29.833 verify_backlog=512 00:16:29.833 verify_state_save=0 00:16:29.833 do_verify=1 00:16:29.833 verify=crc32c-intel 00:16:29.833 [job0] 00:16:29.833 filename=/dev/nvme0n1 00:16:29.833 Could not set queue depth (nvme0n1) 00:16:30.093 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.093 fio-3.35 00:16:30.093 Starting 1 thread 00:16:31.475 00:16:31.475 job0: (groupid=0, jobs=1): err= 0: pid=3392260: Fri Apr 26 12:12:32 2024 00:16:31.475 read: IOPS=648, BW=2593KiB/s (2656kB/s)(2596KiB/1001msec) 00:16:31.475 slat (nsec): min=6771, max=42696, avg=22070.17, stdev=7244.24 00:16:31.475 clat (usec): min=409, max=2645, avg=770.48, stdev=94.33 00:16:31.475 lat (usec): min=417, max=2652, avg=792.55, stdev=95.40 00:16:31.475 clat percentiles (usec): 00:16:31.475 | 1.00th=[ 619], 5.00th=[ 668], 10.00th=[ 676], 20.00th=[ 725], 00:16:31.475 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 775], 60.00th=[ 783], 00:16:31.475 | 70.00th=[ 799], 80.00th=[ 807], 90.00th=[ 824], 95.00th=[ 865], 00:16:31.475 | 99.00th=[ 881], 99.50th=[ 889], 99.90th=[ 2638], 99.95th=[ 2638], 00:16:31.475 | 99.99th=[ 2638] 00:16:31.475 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:31.475 slat (nsec): min=9315, max=46389, avg=26445.01, stdev=9719.13 00:16:31.475 clat (usec): min=242, max=559, avg=436.92, stdev=62.89 00:16:31.475 lat (usec): min=252, max=591, avg=463.36, stdev=68.30 00:16:31.475 clat percentiles (usec): 00:16:31.475 | 1.00th=[ 262], 5.00th=[ 330], 10.00th=[ 351], 20.00th=[ 375], 00:16:31.475 | 30.00th=[ 404], 40.00th=[ 445], 50.00th=[ 453], 60.00th=[ 465], 00:16:31.475 | 70.00th=[ 482], 80.00th=[ 490], 90.00th=[ 498], 95.00th=[ 510], 00:16:31.475 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 553], 99.95th=[ 562], 00:16:31.475 | 99.99th=[ 562] 00:16:31.475 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:31.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:31.475 lat (usec) : 250=0.30%, 500=55.47%, 750=15.42%, 1000=28.75% 00:16:31.475 lat (msec) : 4=0.06% 00:16:31.475 cpu : usr=2.60%, sys=3.90%, ctx=1674, majf=0, minf=1 00:16:31.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:31.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.475 issued rwts: total=649,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:31.475 00:16:31.475 Run status group 0 (all jobs): 00:16:31.475 READ: bw=2593KiB/s (2656kB/s), 2593KiB/s-2593KiB/s (2656kB/s-2656kB/s), io=2596KiB (2658kB), run=1001-1001msec 00:16:31.475 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s 
(4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:16:31.475 00:16:31.475 Disk stats (read/write): 00:16:31.475 nvme0n1: ios=570/1024, merge=0/0, ticks=665/448, in_queue=1113, util=97.70% 00:16:31.475 12:12:32 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:31.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:31.475 12:12:32 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:31.476 12:12:32 -- common/autotest_common.sh@1205 -- # local i=0 00:16:31.476 12:12:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:31.476 12:12:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.476 12:12:32 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:31.476 12:12:32 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.476 12:12:32 -- common/autotest_common.sh@1217 -- # return 0 00:16:31.476 12:12:32 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:31.476 12:12:32 -- target/nmic.sh@53 -- # nvmftestfini 00:16:31.476 12:12:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:31.476 12:12:32 -- nvmf/common.sh@117 -- # sync 00:16:31.476 12:12:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:31.476 12:12:32 -- nvmf/common.sh@120 -- # set +e 00:16:31.476 12:12:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:31.476 12:12:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:31.476 rmmod nvme_tcp 00:16:31.476 rmmod nvme_fabrics 00:16:31.476 rmmod nvme_keyring 00:16:31.476 12:12:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:31.476 12:12:32 -- nvmf/common.sh@124 -- # set -e 00:16:31.476 12:12:32 -- nvmf/common.sh@125 -- # return 0 00:16:31.476 12:12:32 -- nvmf/common.sh@478 -- # '[' -n 3390918 ']' 00:16:31.476 12:12:32 -- nvmf/common.sh@479 -- # killprocess 3390918 00:16:31.476 12:12:32 -- common/autotest_common.sh@936 -- # '[' -z 3390918 ']' 00:16:31.476 12:12:32 -- common/autotest_common.sh@940 -- # kill -0 3390918 00:16:31.476 12:12:32 -- common/autotest_common.sh@941 -- # uname 00:16:31.476 12:12:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:31.476 12:12:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3390918 00:16:31.476 12:12:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:31.476 12:12:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:31.476 12:12:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3390918' 00:16:31.476 killing process with pid 3390918 00:16:31.476 12:12:32 -- common/autotest_common.sh@955 -- # kill 3390918 00:16:31.476 12:12:32 -- common/autotest_common.sh@960 -- # wait 3390918 00:16:31.736 12:12:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:31.736 12:12:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:31.736 12:12:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:31.736 12:12:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:31.736 12:12:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:31.736 12:12:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.736 12:12:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.736 12:12:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.285 12:12:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:34.285 00:16:34.285 real 0m17.617s 00:16:34.285 user 0m49.752s 00:16:34.285 sys 0m6.206s 00:16:34.285 12:12:34 
-- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:34.285 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:16:34.285 ************************************ 00:16:34.285 END TEST nvmf_nmic 00:16:34.285 ************************************ 00:16:34.285 12:12:34 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:34.285 12:12:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:34.285 12:12:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:34.285 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:16:34.285 ************************************ 00:16:34.285 START TEST nvmf_fio_target 00:16:34.285 ************************************ 00:16:34.285 12:12:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:34.285 * Looking for test storage... 00:16:34.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.285 12:12:35 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.285 12:12:35 -- nvmf/common.sh@7 -- # uname -s 00:16:34.285 12:12:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.285 12:12:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.285 12:12:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.285 12:12:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.285 12:12:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.285 12:12:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.285 12:12:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.285 12:12:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.285 12:12:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.285 12:12:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.285 12:12:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:34.285 12:12:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:34.285 12:12:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.285 12:12:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.285 12:12:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.285 12:12:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.285 12:12:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.285 12:12:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.285 12:12:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.285 12:12:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.285 12:12:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.285 12:12:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.286 12:12:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.286 12:12:35 -- paths/export.sh@5 -- # export PATH 00:16:34.286 12:12:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.286 12:12:35 -- nvmf/common.sh@47 -- # : 0 00:16:34.286 12:12:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.286 12:12:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.286 12:12:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.286 12:12:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.286 12:12:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.286 12:12:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.286 12:12:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.286 12:12:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.286 12:12:35 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:34.286 12:12:35 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:34.286 12:12:35 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:34.286 12:12:35 -- target/fio.sh@16 -- # nvmftestinit 00:16:34.286 12:12:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:34.286 12:12:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.286 12:12:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:34.286 12:12:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:34.286 12:12:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:34.286 12:12:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.286 12:12:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.286 12:12:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.286 12:12:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:34.286 12:12:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:34.286 12:12:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:34.286 12:12:35 -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.423 12:12:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:42.423 12:12:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:42.423 12:12:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:42.423 12:12:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:42.423 12:12:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:42.423 12:12:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:42.423 12:12:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:42.423 12:12:42 -- nvmf/common.sh@295 -- # net_devs=() 00:16:42.423 12:12:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:42.423 12:12:42 -- nvmf/common.sh@296 -- # e810=() 00:16:42.423 12:12:42 -- nvmf/common.sh@296 -- # local -ga e810 00:16:42.423 12:12:42 -- nvmf/common.sh@297 -- # x722=() 00:16:42.423 12:12:42 -- nvmf/common.sh@297 -- # local -ga x722 00:16:42.423 12:12:42 -- nvmf/common.sh@298 -- # mlx=() 00:16:42.423 12:12:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:42.423 12:12:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:42.423 12:12:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:42.423 12:12:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:42.423 12:12:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:42.423 12:12:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:42.423 12:12:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:42.423 12:12:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:42.423 12:12:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:42.423 12:12:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:42.423 12:12:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:42.423 12:12:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:42.423 12:12:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:42.423 12:12:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:42.423 12:12:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:42.423 12:12:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.423 12:12:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:42.423 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:42.423 12:12:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.423 12:12:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:42.423 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:42.423 12:12:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
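The loop running here is gather_supported_nvmf_pci_devs from nvmf/common.sh: each PCI function is matched against the Intel e810 IDs (0x1592/0x159b) and the Mellanox IDs collected above, and the net devices under the matching function's sysfs node become cvl_0_0/cvl_0_1. The same lookup can be done by hand, for example:

    # list the E810 functions the script matches (vendor 8086, device 159b)
    lspci -d 8086:159b
    # the net devices it picks up for the two E810 ports in this system
    ls /sys/bus/pci/devices/0000:31:00.0/net /sys/bus/pci/devices/0000:31:00.1/net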
00:16:42.423 12:12:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:42.423 12:12:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.423 12:12:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.423 12:12:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:42.423 12:12:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.423 12:12:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:42.423 Found net devices under 0000:31:00.0: cvl_0_0 00:16:42.423 12:12:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.423 12:12:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.423 12:12:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.423 12:12:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:42.423 12:12:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.423 12:12:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:42.423 Found net devices under 0000:31:00.1: cvl_0_1 00:16:42.423 12:12:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.423 12:12:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:42.423 12:12:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:42.423 12:12:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:42.423 12:12:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.423 12:12:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.423 12:12:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:42.423 12:12:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:42.423 12:12:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:42.423 12:12:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:42.423 12:12:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:42.423 12:12:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:42.423 12:12:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.423 12:12:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:42.423 12:12:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:42.423 12:12:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:42.423 12:12:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:42.423 12:12:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:42.423 12:12:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:42.423 12:12:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:42.423 12:12:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:42.423 12:12:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:42.423 12:12:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:42.423 12:12:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:42.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:42.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:16:42.423 00:16:42.423 --- 10.0.0.2 ping statistics --- 00:16:42.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.423 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:16:42.423 12:12:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:42.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:42.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:16:42.423 00:16:42.423 --- 10.0.0.1 ping statistics --- 00:16:42.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.423 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:16:42.423 12:12:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.423 12:12:42 -- nvmf/common.sh@411 -- # return 0 00:16:42.423 12:12:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:42.423 12:12:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.423 12:12:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:42.423 12:12:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.423 12:12:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:42.423 12:12:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:42.423 12:12:42 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:42.423 12:12:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:42.423 12:12:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:42.423 12:12:42 -- common/autotest_common.sh@10 -- # set +x 00:16:42.423 12:12:42 -- nvmf/common.sh@470 -- # nvmfpid=3396869 00:16:42.423 12:12:42 -- nvmf/common.sh@471 -- # waitforlisten 3396869 00:16:42.423 12:12:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:42.423 12:12:42 -- common/autotest_common.sh@817 -- # '[' -z 3396869 ']' 00:16:42.423 12:12:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.424 12:12:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:42.424 12:12:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.424 12:12:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:42.424 12:12:42 -- common/autotest_common.sh@10 -- # set +x 00:16:42.424 [2024-04-26 12:12:42.593336] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:42.424 [2024-04-26 12:12:42.593424] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.424 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.424 [2024-04-26 12:12:42.667219] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:42.424 [2024-04-26 12:12:42.739770] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.424 [2024-04-26 12:12:42.739811] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:42.424 [2024-04-26 12:12:42.739824] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.424 [2024-04-26 12:12:42.739832] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.424 [2024-04-26 12:12:42.739843] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.424 [2024-04-26 12:12:42.739990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.424 [2024-04-26 12:12:42.740121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.424 [2024-04-26 12:12:42.740278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.424 [2024-04-26 12:12:42.740279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:42.424 12:12:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:42.424 12:12:43 -- common/autotest_common.sh@850 -- # return 0 00:16:42.424 12:12:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:42.424 12:12:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:42.424 12:12:43 -- common/autotest_common.sh@10 -- # set +x 00:16:42.424 12:12:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.424 12:12:43 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:42.424 [2024-04-26 12:12:43.551857] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.424 12:12:43 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:42.684 12:12:43 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:42.684 12:12:43 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:42.944 12:12:43 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:42.944 12:12:43 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:42.944 12:12:44 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:42.944 12:12:44 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:43.204 12:12:44 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:43.204 12:12:44 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:43.465 12:12:44 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:43.465 12:12:44 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:43.465 12:12:44 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:43.725 12:12:44 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:43.725 12:12:44 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:43.985 12:12:44 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:43.985 12:12:44 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:43.985 12:12:45 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:44.245 12:12:45 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:44.245 12:12:45 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:44.506 12:12:45 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:44.506 12:12:45 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:44.506 12:12:45 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.767 [2024-04-26 12:12:45.789798] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.767 12:12:45 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:45.073 12:12:45 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:45.073 12:12:46 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:46.458 12:12:47 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:46.458 12:12:47 -- common/autotest_common.sh@1184 -- # local i=0 00:16:46.458 12:12:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.458 12:12:47 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:16:46.458 12:12:47 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:16:46.458 12:12:47 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:49.003 12:12:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:49.003 12:12:49 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:49.003 12:12:49 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.003 12:12:49 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:16:49.003 12:12:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.003 12:12:49 -- common/autotest_common.sh@1194 -- # return 0 00:16:49.003 12:12:49 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:49.003 [global] 00:16:49.003 thread=1 00:16:49.003 invalidate=1 00:16:49.003 rw=write 00:16:49.003 time_based=1 00:16:49.003 runtime=1 00:16:49.003 ioengine=libaio 00:16:49.003 direct=1 00:16:49.003 bs=4096 00:16:49.003 iodepth=1 00:16:49.003 norandommap=0 00:16:49.003 numjobs=1 00:16:49.003 00:16:49.003 verify_dump=1 00:16:49.003 verify_backlog=512 00:16:49.003 verify_state_save=0 00:16:49.003 do_verify=1 00:16:49.003 verify=crc32c-intel 00:16:49.003 [job0] 00:16:49.003 filename=/dev/nvme0n1 00:16:49.003 [job1] 00:16:49.003 filename=/dev/nvme0n2 00:16:49.003 [job2] 00:16:49.003 filename=/dev/nvme0n3 00:16:49.003 [job3] 00:16:49.003 filename=/dev/nvme0n4 00:16:49.003 Could not set queue depth (nvme0n1) 00:16:49.003 Could not set queue depth (nvme0n2) 00:16:49.003 Could not set queue depth (nvme0n3) 00:16:49.003 Could not set queue depth (nvme0n4) 00:16:49.003 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:16:49.003 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.003 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.003 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.003 fio-3.35 00:16:49.003 Starting 4 threads 00:16:50.390 00:16:50.390 job0: (groupid=0, jobs=1): err= 0: pid=3398469: Fri Apr 26 12:12:51 2024 00:16:50.390 read: IOPS=648, BW=2593KiB/s (2656kB/s)(2596KiB/1001msec) 00:16:50.390 slat (nsec): min=6179, max=64264, avg=24353.53, stdev=7874.37 00:16:50.390 clat (usec): min=353, max=1094, avg=713.56, stdev=108.30 00:16:50.390 lat (usec): min=380, max=1123, avg=737.91, stdev=110.15 00:16:50.390 clat percentiles (usec): 00:16:50.390 | 1.00th=[ 441], 5.00th=[ 537], 10.00th=[ 570], 20.00th=[ 619], 00:16:50.390 | 30.00th=[ 660], 40.00th=[ 685], 50.00th=[ 725], 60.00th=[ 750], 00:16:50.390 | 70.00th=[ 775], 80.00th=[ 807], 90.00th=[ 848], 95.00th=[ 881], 00:16:50.390 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 1090], 99.95th=[ 1090], 00:16:50.390 | 99.99th=[ 1090] 00:16:50.390 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:50.390 slat (usec): min=8, max=13391, avg=44.15, stdev=417.64 00:16:50.390 clat (usec): min=108, max=1921, avg=452.54, stdev=154.17 00:16:50.390 lat (usec): min=118, max=13838, avg=496.68, stdev=445.97 00:16:50.390 clat percentiles (usec): 00:16:50.390 | 1.00th=[ 141], 5.00th=[ 239], 10.00th=[ 269], 20.00th=[ 338], 00:16:50.390 | 30.00th=[ 379], 40.00th=[ 416], 50.00th=[ 453], 60.00th=[ 482], 00:16:50.390 | 70.00th=[ 519], 80.00th=[ 570], 90.00th=[ 627], 95.00th=[ 676], 00:16:50.390 | 99.00th=[ 766], 99.50th=[ 840], 99.90th=[ 1598], 99.95th=[ 1926], 00:16:50.390 | 99.99th=[ 1926] 00:16:50.390 bw ( KiB/s): min= 4087, max= 4087, per=41.27%, avg=4087.00, stdev= 0.00, samples=1 00:16:50.390 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:16:50.390 lat (usec) : 250=3.89%, 500=37.36%, 750=41.96%, 1000=16.38% 00:16:50.390 lat (msec) : 2=0.42% 00:16:50.390 cpu : usr=3.80%, sys=5.80%, ctx=1676, majf=0, minf=1 00:16:50.390 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.390 issued rwts: total=649,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.390 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.390 job1: (groupid=0, jobs=1): err= 0: pid=3398474: Fri Apr 26 12:12:51 2024 00:16:50.390 read: IOPS=17, BW=70.2KiB/s (71.9kB/s)(72.0KiB/1026msec) 00:16:50.390 slat (nsec): min=26014, max=27339, avg=26819.28, stdev=373.40 00:16:50.390 clat (usec): min=40944, max=42488, avg=41879.95, stdev=367.02 00:16:50.390 lat (usec): min=40971, max=42515, avg=41906.77, stdev=366.95 00:16:50.390 clat percentiles (usec): 00:16:50.390 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:16:50.390 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:16:50.390 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:16:50.390 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:50.390 | 99.99th=[42730] 00:16:50.390 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:16:50.390 slat (usec): min=9, max=13220, avg=55.90, 
stdev=583.04 00:16:50.390 clat (usec): min=120, max=1903, avg=467.35, stdev=168.54 00:16:50.390 lat (usec): min=130, max=14063, avg=523.25, stdev=623.14 00:16:50.390 clat percentiles (usec): 00:16:50.390 | 1.00th=[ 190], 5.00th=[ 269], 10.00th=[ 302], 20.00th=[ 347], 00:16:50.390 | 30.00th=[ 383], 40.00th=[ 429], 50.00th=[ 457], 60.00th=[ 482], 00:16:50.390 | 70.00th=[ 502], 80.00th=[ 537], 90.00th=[ 635], 95.00th=[ 742], 00:16:50.390 | 99.00th=[ 873], 99.50th=[ 1516], 99.90th=[ 1909], 99.95th=[ 1909], 00:16:50.390 | 99.99th=[ 1909] 00:16:50.390 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:16:50.390 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:50.390 lat (usec) : 250=2.26%, 500=65.28%, 750=24.53%, 1000=3.58% 00:16:50.390 lat (msec) : 2=0.94%, 50=3.40% 00:16:50.390 cpu : usr=1.07%, sys=1.76%, ctx=532, majf=0, minf=1 00:16:50.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.391 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.391 job2: (groupid=0, jobs=1): err= 0: pid=3398484: Fri Apr 26 12:12:51 2024 00:16:50.391 read: IOPS=17, BW=69.6KiB/s (71.3kB/s)(72.0KiB/1034msec) 00:16:50.391 slat (nsec): min=26457, max=27593, avg=26953.78, stdev=309.37 00:16:50.391 clat (usec): min=1186, max=42996, avg=39724.09, stdev=9621.69 00:16:50.391 lat (usec): min=1213, max=43022, avg=39751.04, stdev=9621.79 00:16:50.391 clat percentiles (usec): 00:16:50.391 | 1.00th=[ 1188], 5.00th=[ 1188], 10.00th=[41681], 20.00th=[41681], 00:16:50.391 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:50.391 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:16:50.391 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:50.391 | 99.99th=[43254] 00:16:50.391 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:16:50.391 slat (nsec): min=9214, max=59464, avg=29528.73, stdev=10943.26 00:16:50.391 clat (usec): min=147, max=1951, avg=585.79, stdev=168.99 00:16:50.391 lat (usec): min=156, max=1990, avg=615.32, stdev=174.00 00:16:50.391 clat percentiles (usec): 00:16:50.391 | 1.00th=[ 245], 5.00th=[ 330], 10.00th=[ 379], 20.00th=[ 453], 00:16:50.391 | 30.00th=[ 502], 40.00th=[ 545], 50.00th=[ 594], 60.00th=[ 635], 00:16:50.391 | 70.00th=[ 676], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 799], 00:16:50.391 | 99.00th=[ 865], 99.50th=[ 971], 99.90th=[ 1958], 99.95th=[ 1958], 00:16:50.391 | 99.99th=[ 1958] 00:16:50.391 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:16:50.391 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:50.391 lat (usec) : 250=1.13%, 500=27.74%, 750=54.72%, 1000=12.64% 00:16:50.391 lat (msec) : 2=0.57%, 50=3.21% 00:16:50.391 cpu : usr=0.97%, sys=1.74%, ctx=531, majf=0, minf=1 00:16:50.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.391 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.391 job3: (groupid=0, 
jobs=1): err= 0: pid=3398489: Fri Apr 26 12:12:51 2024 00:16:50.391 read: IOPS=349, BW=1399KiB/s (1432kB/s)(1400KiB/1001msec) 00:16:50.391 slat (nsec): min=26263, max=56998, avg=27429.02, stdev=3135.76 00:16:50.391 clat (usec): min=523, max=42261, avg=2003.14, stdev=6506.23 00:16:50.391 lat (usec): min=551, max=42288, avg=2030.57, stdev=6506.21 00:16:50.391 clat percentiles (usec): 00:16:50.391 | 1.00th=[ 676], 5.00th=[ 775], 10.00th=[ 816], 20.00th=[ 873], 00:16:50.391 | 30.00th=[ 906], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[ 979], 00:16:50.391 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1106], 00:16:50.391 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:50.391 | 99.99th=[42206] 00:16:50.391 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:50.391 slat (nsec): min=9033, max=67272, avg=30093.91, stdev=11351.28 00:16:50.391 clat (usec): min=152, max=1780, avg=523.42, stdev=204.94 00:16:50.391 lat (usec): min=164, max=1814, avg=553.51, stdev=208.61 00:16:50.391 clat percentiles (usec): 00:16:50.391 | 1.00th=[ 245], 5.00th=[ 281], 10.00th=[ 322], 20.00th=[ 383], 00:16:50.391 | 30.00th=[ 420], 40.00th=[ 457], 50.00th=[ 486], 60.00th=[ 519], 00:16:50.391 | 70.00th=[ 562], 80.00th=[ 635], 90.00th=[ 758], 95.00th=[ 816], 00:16:50.391 | 99.00th=[ 1385], 99.50th=[ 1582], 99.90th=[ 1778], 99.95th=[ 1778], 00:16:50.391 | 99.99th=[ 1778] 00:16:50.391 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:16:50.391 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:50.391 lat (usec) : 250=0.81%, 500=30.97%, 750=22.85%, 1000=29.35% 00:16:50.391 lat (msec) : 2=14.97%, 50=1.04% 00:16:50.391 cpu : usr=1.90%, sys=3.10%, ctx=863, majf=0, minf=1 00:16:50.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.391 issued rwts: total=350,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.391 00:16:50.391 Run status group 0 (all jobs): 00:16:50.391 READ: bw=4004KiB/s (4100kB/s), 69.6KiB/s-2593KiB/s (71.3kB/s-2656kB/s), io=4140KiB (4239kB), run=1001-1034msec 00:16:50.391 WRITE: bw=9903KiB/s (10.1MB/s), 1981KiB/s-4092KiB/s (2028kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1034msec 00:16:50.391 00:16:50.391 Disk stats (read/write): 00:16:50.391 nvme0n1: ios=563/880, merge=0/0, ticks=744/297, in_queue=1041, util=86.97% 00:16:50.391 nvme0n2: ios=64/512, merge=0/0, ticks=1270/186, in_queue=1456, util=90.91% 00:16:50.391 nvme0n3: ios=70/512, merge=0/0, ticks=1094/242, in_queue=1336, util=92.17% 00:16:50.391 nvme0n4: ios=247/512, merge=0/0, ticks=1119/210, in_queue=1329, util=97.00% 00:16:50.391 12:12:51 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:50.391 [global] 00:16:50.391 thread=1 00:16:50.391 invalidate=1 00:16:50.391 rw=randwrite 00:16:50.391 time_based=1 00:16:50.391 runtime=1 00:16:50.391 ioengine=libaio 00:16:50.391 direct=1 00:16:50.391 bs=4096 00:16:50.391 iodepth=1 00:16:50.391 norandommap=0 00:16:50.391 numjobs=1 00:16:50.391 00:16:50.391 verify_dump=1 00:16:50.391 verify_backlog=512 00:16:50.391 verify_state_save=0 00:16:50.391 do_verify=1 00:16:50.391 verify=crc32c-intel 00:16:50.391 [job0] 00:16:50.391 
filename=/dev/nvme0n1 00:16:50.391 [job1] 00:16:50.391 filename=/dev/nvme0n2 00:16:50.391 [job2] 00:16:50.391 filename=/dev/nvme0n3 00:16:50.391 [job3] 00:16:50.391 filename=/dev/nvme0n4 00:16:50.391 Could not set queue depth (nvme0n1) 00:16:50.391 Could not set queue depth (nvme0n2) 00:16:50.391 Could not set queue depth (nvme0n3) 00:16:50.391 Could not set queue depth (nvme0n4) 00:16:50.662 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:50.662 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:50.663 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:50.663 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:50.663 fio-3.35 00:16:50.663 Starting 4 threads 00:16:52.060 00:16:52.060 job0: (groupid=0, jobs=1): err= 0: pid=3398988: Fri Apr 26 12:12:52 2024 00:16:52.060 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:52.060 slat (nsec): min=23796, max=58931, avg=24667.02, stdev=2238.43 00:16:52.060 clat (usec): min=537, max=1563, avg=1036.02, stdev=92.27 00:16:52.060 lat (usec): min=561, max=1587, avg=1060.69, stdev=92.22 00:16:52.060 clat percentiles (usec): 00:16:52.060 | 1.00th=[ 758], 5.00th=[ 873], 10.00th=[ 914], 20.00th=[ 971], 00:16:52.060 | 30.00th=[ 1004], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1074], 00:16:52.060 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1123], 95.00th=[ 1139], 00:16:52.060 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1565], 99.95th=[ 1565], 00:16:52.060 | 99.99th=[ 1565] 00:16:52.060 write: IOPS=724, BW=2897KiB/s (2967kB/s)(2900KiB/1001msec); 0 zone resets 00:16:52.060 slat (nsec): min=8727, max=66233, avg=27287.42, stdev=8450.33 00:16:52.060 clat (usec): min=175, max=981, avg=589.30, stdev=131.07 00:16:52.060 lat (usec): min=185, max=1011, avg=616.59, stdev=134.70 00:16:52.060 clat percentiles (usec): 00:16:52.060 | 1.00th=[ 277], 5.00th=[ 367], 10.00th=[ 424], 20.00th=[ 486], 00:16:52.060 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 594], 60.00th=[ 627], 00:16:52.061 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 758], 95.00th=[ 816], 00:16:52.061 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 979], 99.95th=[ 979], 00:16:52.061 | 99.99th=[ 979] 00:16:52.061 bw ( KiB/s): min= 4096, max= 4096, per=46.47%, avg=4096.00, stdev= 0.00, samples=1 00:16:52.061 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:52.061 lat (usec) : 250=0.16%, 500=13.18%, 750=39.37%, 1000=17.95% 00:16:52.061 lat (msec) : 2=29.35% 00:16:52.061 cpu : usr=2.30%, sys=3.00%, ctx=1237, majf=0, minf=1 00:16:52.061 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.061 issued rwts: total=512,725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.061 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.061 job1: (groupid=0, jobs=1): err= 0: pid=3398989: Fri Apr 26 12:12:52 2024 00:16:52.061 read: IOPS=17, BW=71.4KiB/s (73.1kB/s)(72.0KiB/1008msec) 00:16:52.061 slat (nsec): min=23320, max=24026, avg=23642.44, stdev=160.33 00:16:52.061 clat (usec): min=981, max=42040, avg=39507.75, stdev=9621.77 00:16:52.061 lat (usec): min=1004, max=42064, avg=39531.39, stdev=9621.77 00:16:52.061 clat percentiles (usec): 
00:16:52.061 | 1.00th=[ 979], 5.00th=[ 979], 10.00th=[41157], 20.00th=[41157], 00:16:52.061 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:16:52.061 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:52.061 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:52.061 | 99.99th=[42206] 00:16:52.061 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:16:52.061 slat (nsec): min=9009, max=51185, avg=27455.37, stdev=7398.66 00:16:52.061 clat (usec): min=126, max=966, avg=542.55, stdev=155.10 00:16:52.061 lat (usec): min=136, max=996, avg=570.01, stdev=158.53 00:16:52.061 clat percentiles (usec): 00:16:52.061 | 1.00th=[ 184], 5.00th=[ 293], 10.00th=[ 338], 20.00th=[ 408], 00:16:52.061 | 30.00th=[ 457], 40.00th=[ 506], 50.00th=[ 545], 60.00th=[ 594], 00:16:52.061 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 734], 95.00th=[ 799], 00:16:52.061 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 963], 99.95th=[ 963], 00:16:52.061 | 99.99th=[ 963] 00:16:52.061 bw ( KiB/s): min= 4096, max= 4096, per=46.47%, avg=4096.00, stdev= 0.00, samples=1 00:16:52.061 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:52.061 lat (usec) : 250=2.45%, 500=34.91%, 750=51.32%, 1000=8.11% 00:16:52.061 lat (msec) : 50=3.21% 00:16:52.061 cpu : usr=0.99%, sys=1.19%, ctx=530, majf=0, minf=1 00:16:52.061 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.061 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.061 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.061 job2: (groupid=0, jobs=1): err= 0: pid=3398993: Fri Apr 26 12:12:52 2024 00:16:52.061 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1008msec) 00:16:52.061 slat (nsec): min=23916, max=24778, avg=24201.18, stdev=214.30 00:16:52.061 clat (usec): min=1109, max=42979, avg=39675.10, stdev=9943.77 00:16:52.061 lat (usec): min=1133, max=43003, avg=39699.30, stdev=9943.76 00:16:52.061 clat percentiles (usec): 00:16:52.061 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41681], 20.00th=[41681], 00:16:52.061 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:52.061 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:16:52.061 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:52.061 | 99.99th=[42730] 00:16:52.061 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:16:52.061 slat (nsec): min=9011, max=48062, avg=26808.52, stdev=8512.59 00:16:52.061 clat (usec): min=267, max=854, avg=615.38, stdev=106.94 00:16:52.061 lat (usec): min=276, max=884, avg=642.19, stdev=110.60 00:16:52.061 clat percentiles (usec): 00:16:52.061 | 1.00th=[ 347], 5.00th=[ 424], 10.00th=[ 469], 20.00th=[ 529], 00:16:52.061 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:16:52.061 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 742], 95.00th=[ 758], 00:16:52.061 | 99.00th=[ 799], 99.50th=[ 824], 99.90th=[ 857], 99.95th=[ 857], 00:16:52.061 | 99.99th=[ 857] 00:16:52.061 bw ( KiB/s): min= 4096, max= 4096, per=46.47%, avg=4096.00, stdev= 0.00, samples=1 00:16:52.061 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:52.061 lat (usec) : 500=15.12%, 750=74.29%, 1000=7.37% 00:16:52.061 lat (msec) : 2=0.19%, 50=3.02% 00:16:52.061 cpu : 
usr=0.40%, sys=1.79%, ctx=529, majf=0, minf=1 00:16:52.061 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.061 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.061 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.061 job3: (groupid=0, jobs=1): err= 0: pid=3398995: Fri Apr 26 12:12:52 2024 00:16:52.061 read: IOPS=18, BW=74.1KiB/s (75.9kB/s)(76.0KiB/1026msec) 00:16:52.061 slat (nsec): min=24313, max=24917, avg=24630.89, stdev=174.23 00:16:52.061 clat (usec): min=41024, max=42946, avg=41880.79, stdev=517.38 00:16:52.061 lat (usec): min=41048, max=42971, avg=41905.42, stdev=517.36 00:16:52.061 clat percentiles (usec): 00:16:52.061 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:52.061 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:52.061 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:16:52.061 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:52.061 | 99.99th=[42730] 00:16:52.061 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:16:52.061 slat (nsec): min=8847, max=67630, avg=26619.49, stdev=9060.45 00:16:52.061 clat (usec): min=141, max=878, avg=413.77, stdev=161.13 00:16:52.061 lat (usec): min=152, max=909, avg=440.39, stdev=165.38 00:16:52.061 clat percentiles (usec): 00:16:52.061 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 186], 20.00th=[ 281], 00:16:52.061 | 30.00th=[ 314], 40.00th=[ 347], 50.00th=[ 396], 60.00th=[ 453], 00:16:52.061 | 70.00th=[ 502], 80.00th=[ 570], 90.00th=[ 644], 95.00th=[ 685], 00:16:52.061 | 99.00th=[ 758], 99.50th=[ 832], 99.90th=[ 881], 99.95th=[ 881], 00:16:52.061 | 99.99th=[ 881] 00:16:52.061 bw ( KiB/s): min= 4096, max= 4096, per=46.47%, avg=4096.00, stdev= 0.00, samples=1 00:16:52.061 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:52.061 lat (usec) : 250=14.69%, 500=52.73%, 750=27.50%, 1000=1.51% 00:16:52.061 lat (msec) : 50=3.58% 00:16:52.061 cpu : usr=0.49%, sys=1.56%, ctx=531, majf=0, minf=1 00:16:52.061 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.061 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.061 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.061 00:16:52.061 Run status group 0 (all jobs): 00:16:52.061 READ: bw=2207KiB/s (2260kB/s), 67.5KiB/s-2046KiB/s (69.1kB/s-2095kB/s), io=2264KiB (2318kB), run=1001-1026msec 00:16:52.061 WRITE: bw=8815KiB/s (9026kB/s), 1996KiB/s-2897KiB/s (2044kB/s-2967kB/s), io=9044KiB (9261kB), run=1001-1026msec 00:16:52.061 00:16:52.061 Disk stats (read/write): 00:16:52.061 nvme0n1: ios=523/512, merge=0/0, ticks=552/294, in_queue=846, util=88.18% 00:16:52.061 nvme0n2: ios=50/512, merge=0/0, ticks=555/260, in_queue=815, util=88.07% 00:16:52.061 nvme0n3: ios=12/512, merge=0/0, ticks=464/295, in_queue=759, util=88.50% 00:16:52.061 nvme0n4: ios=14/512, merge=0/0, ticks=585/200, in_queue=785, util=89.54% 00:16:52.061 12:12:52 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:52.061 [global] 00:16:52.061 
thread=1 00:16:52.061 invalidate=1 00:16:52.061 rw=write 00:16:52.061 time_based=1 00:16:52.061 runtime=1 00:16:52.061 ioengine=libaio 00:16:52.061 direct=1 00:16:52.061 bs=4096 00:16:52.061 iodepth=128 00:16:52.061 norandommap=0 00:16:52.061 numjobs=1 00:16:52.061 00:16:52.061 verify_dump=1 00:16:52.061 verify_backlog=512 00:16:52.061 verify_state_save=0 00:16:52.061 do_verify=1 00:16:52.061 verify=crc32c-intel 00:16:52.061 [job0] 00:16:52.061 filename=/dev/nvme0n1 00:16:52.061 [job1] 00:16:52.061 filename=/dev/nvme0n2 00:16:52.061 [job2] 00:16:52.061 filename=/dev/nvme0n3 00:16:52.061 [job3] 00:16:52.061 filename=/dev/nvme0n4 00:16:52.061 Could not set queue depth (nvme0n1) 00:16:52.061 Could not set queue depth (nvme0n2) 00:16:52.061 Could not set queue depth (nvme0n3) 00:16:52.061 Could not set queue depth (nvme0n4) 00:16:52.334 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:52.334 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:52.334 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:52.334 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:52.334 fio-3.35 00:16:52.334 Starting 4 threads 00:16:53.723 00:16:53.723 job0: (groupid=0, jobs=1): err= 0: pid=3399517: Fri Apr 26 12:12:54 2024 00:16:53.723 read: IOPS=4861, BW=19.0MiB/s (19.9MB/s)(19.8MiB/1045msec) 00:16:53.723 slat (nsec): min=947, max=18866k, avg=92367.53, stdev=834999.26 00:16:53.723 clat (usec): min=1760, max=57522, avg=13786.76, stdev=10614.72 00:16:53.723 lat (usec): min=1767, max=57530, avg=13879.12, stdev=10677.18 00:16:53.723 clat percentiles (usec): 00:16:53.723 | 1.00th=[ 3752], 5.00th=[ 5932], 10.00th=[ 6325], 20.00th=[ 6915], 00:16:53.723 | 30.00th=[ 7308], 40.00th=[ 7701], 50.00th=[ 8586], 60.00th=[11076], 00:16:53.723 | 70.00th=[15139], 80.00th=[21103], 90.00th=[28967], 95.00th=[41157], 00:16:53.723 | 99.00th=[50070], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:16:53.724 | 99.99th=[57410] 00:16:53.724 write: IOPS=4899, BW=19.1MiB/s (20.1MB/s)(20.0MiB/1045msec); 0 zone resets 00:16:53.724 slat (nsec): min=1635, max=42932k, avg=84342.26, stdev=1074419.31 00:16:53.724 clat (usec): min=1330, max=114260, avg=10268.73, stdev=9953.92 00:16:53.724 lat (usec): min=1339, max=114268, avg=10353.08, stdev=10092.21 00:16:53.724 clat percentiles (msec): 00:16:53.724 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 6], 00:16:53.724 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 8], 00:16:53.724 | 70.00th=[ 10], 80.00th=[ 13], 90.00th=[ 22], 95.00th=[ 28], 00:16:53.724 | 99.00th=[ 45], 99.50th=[ 87], 99.90th=[ 114], 99.95th=[ 114], 00:16:53.724 | 99.99th=[ 114] 00:16:53.724 bw ( KiB/s): min=20216, max=20744, per=25.26%, avg=20480.00, stdev=373.35, samples=2 00:16:53.724 iops : min= 5054, max= 5186, avg=5120.00, stdev=93.34, samples=2 00:16:53.724 lat (msec) : 2=0.46%, 4=3.40%, 10=59.87%, 20=20.25%, 50=15.14% 00:16:53.724 lat (msec) : 100=0.80%, 250=0.08% 00:16:53.724 cpu : usr=4.31%, sys=5.36%, ctx=357, majf=0, minf=1 00:16:53.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:53.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.724 issued rwts: total=5080,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.724 
latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.724 job1: (groupid=0, jobs=1): err= 0: pid=3399518: Fri Apr 26 12:12:54 2024 00:16:53.724 read: IOPS=6684, BW=26.1MiB/s (27.4MB/s)(26.3MiB/1006msec) 00:16:53.724 slat (nsec): min=862, max=47103k, avg=66629.41, stdev=759879.11 00:16:53.724 clat (usec): min=2034, max=56892, avg=10044.23, stdev=6782.48 00:16:53.724 lat (usec): min=2036, max=56917, avg=10110.86, stdev=6816.75 00:16:53.724 clat percentiles (usec): 00:16:53.724 | 1.00th=[ 2999], 5.00th=[ 5014], 10.00th=[ 5800], 20.00th=[ 7111], 00:16:53.724 | 30.00th=[ 7832], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9372], 00:16:53.724 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[12911], 95.00th=[16712], 00:16:53.724 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:16:53.724 | 99.99th=[56886] 00:16:53.724 write: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec); 0 zone resets 00:16:53.724 slat (nsec): min=1545, max=10756k, avg=60282.71, stdev=502147.75 00:16:53.724 clat (usec): min=681, max=52798, avg=8385.46, stdev=3055.89 00:16:53.724 lat (usec): min=688, max=52807, avg=8445.74, stdev=3089.15 00:16:53.724 clat percentiles (usec): 00:16:53.724 | 1.00th=[ 1762], 5.00th=[ 3720], 10.00th=[ 4883], 20.00th=[ 5735], 00:16:53.724 | 30.00th=[ 6783], 40.00th=[ 7767], 50.00th=[ 8160], 60.00th=[ 8979], 00:16:53.724 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[12256], 95.00th=[13960], 00:16:53.724 | 99.00th=[16909], 99.50th=[16909], 99.90th=[22414], 99.95th=[22414], 00:16:53.724 | 99.99th=[52691] 00:16:53.724 bw ( KiB/s): min=28216, max=28664, per=35.08%, avg=28440.00, stdev=316.78, samples=2 00:16:53.724 iops : min= 7054, max= 7166, avg=7110.00, stdev=79.20, samples=2 00:16:53.724 lat (usec) : 750=0.02%, 1000=0.02% 00:16:53.724 lat (msec) : 2=0.52%, 4=3.66%, 10=70.57%, 20=23.65%, 50=0.66% 00:16:53.724 lat (msec) : 100=0.91% 00:16:53.724 cpu : usr=5.67%, sys=6.87%, ctx=403, majf=0, minf=1 00:16:53.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:16:53.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.724 issued rwts: total=6725,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.724 job2: (groupid=0, jobs=1): err= 0: pid=3399519: Fri Apr 26 12:12:54 2024 00:16:53.724 read: IOPS=3049, BW=11.9MiB/s (12.5MB/s)(12.1MiB/1012msec) 00:16:53.724 slat (nsec): min=949, max=16244k, avg=120584.22, stdev=846884.83 00:16:53.724 clat (usec): min=3804, max=86701, avg=13205.54, stdev=9323.96 00:16:53.724 lat (usec): min=3923, max=86708, avg=13326.13, stdev=9434.60 00:16:53.724 clat percentiles (usec): 00:16:53.724 | 1.00th=[ 4146], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8455], 00:16:53.724 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[11207], 60.00th=[12518], 00:16:53.724 | 70.00th=[13566], 80.00th=[15664], 90.00th=[17433], 95.00th=[22676], 00:16:53.724 | 99.00th=[67634], 99.50th=[80217], 99.90th=[86508], 99.95th=[86508], 00:16:53.724 | 99.99th=[86508] 00:16:53.724 write: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec); 0 zone resets 00:16:53.724 slat (nsec): min=1552, max=9939.2k, avg=163617.12, stdev=830223.73 00:16:53.724 clat (usec): min=723, max=86705, avg=24450.89, stdev=22527.60 00:16:53.724 lat (usec): min=743, max=86714, avg=24614.50, stdev=22664.76 00:16:53.724 clat percentiles (usec): 00:16:53.724 | 1.00th=[ 2147], 5.00th=[ 4178], 
10.00th=[ 5211], 20.00th=[ 7177], 00:16:53.724 | 30.00th=[ 9765], 40.00th=[11731], 50.00th=[18220], 60.00th=[20579], 00:16:53.724 | 70.00th=[24249], 80.00th=[39584], 90.00th=[70779], 95.00th=[76022], 00:16:53.724 | 99.00th=[81265], 99.50th=[81265], 99.90th=[83362], 99.95th=[86508], 00:16:53.724 | 99.99th=[86508] 00:16:53.724 bw ( KiB/s): min= 9520, max=18240, per=17.12%, avg=13880.00, stdev=6165.97, samples=2 00:16:53.724 iops : min= 2380, max= 4560, avg=3470.00, stdev=1541.49, samples=2 00:16:53.724 lat (usec) : 750=0.04% 00:16:53.724 lat (msec) : 2=0.28%, 4=2.50%, 10=32.53%, 20=39.75%, 50=15.47% 00:16:53.724 lat (msec) : 100=9.42% 00:16:53.724 cpu : usr=2.08%, sys=4.35%, ctx=361, majf=0, minf=1 00:16:53.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:53.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.724 issued rwts: total=3086,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.724 job3: (groupid=0, jobs=1): err= 0: pid=3399520: Fri Apr 26 12:12:54 2024 00:16:53.724 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:16:53.724 slat (nsec): min=898, max=9545.9k, avg=84059.04, stdev=568934.19 00:16:53.724 clat (usec): min=4411, max=30152, avg=10856.98, stdev=3006.66 00:16:53.724 lat (usec): min=4417, max=30161, avg=10941.04, stdev=3042.78 00:16:53.724 clat percentiles (usec): 00:16:53.724 | 1.00th=[ 6390], 5.00th=[ 7832], 10.00th=[ 8225], 20.00th=[ 8455], 00:16:53.724 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[11076], 00:16:53.724 | 70.00th=[11994], 80.00th=[12518], 90.00th=[15008], 95.00th=[15926], 00:16:53.724 | 99.00th=[20055], 99.50th=[24511], 99.90th=[30016], 99.95th=[30278], 00:16:53.724 | 99.99th=[30278] 00:16:53.724 write: IOPS=5289, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1004msec); 0 zone resets 00:16:53.724 slat (nsec): min=1525, max=10710k, avg=102010.93, stdev=614160.14 00:16:53.724 clat (usec): min=1130, max=66094, avg=13516.86, stdev=10947.75 00:16:53.724 lat (usec): min=1141, max=66105, avg=13618.87, stdev=11023.55 00:16:53.724 clat percentiles (usec): 00:16:53.724 | 1.00th=[ 4686], 5.00th=[ 4883], 10.00th=[ 6063], 20.00th=[ 7242], 00:16:53.724 | 30.00th=[ 7963], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[11207], 00:16:53.724 | 70.00th=[12387], 80.00th=[18482], 90.00th=[24249], 95.00th=[38536], 00:16:53.724 | 99.00th=[61080], 99.50th=[63701], 99.90th=[66323], 99.95th=[66323], 00:16:53.724 | 99.99th=[66323] 00:16:53.724 bw ( KiB/s): min=16896, max=24576, per=25.57%, avg=20736.00, stdev=5430.58, samples=2 00:16:53.724 iops : min= 4224, max= 6144, avg=5184.00, stdev=1357.65, samples=2 00:16:53.724 lat (msec) : 2=0.02%, 4=0.21%, 10=51.54%, 20=39.97%, 50=6.99% 00:16:53.724 lat (msec) : 100=1.28% 00:16:53.724 cpu : usr=3.69%, sys=5.68%, ctx=465, majf=0, minf=1 00:16:53.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:53.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.724 issued rwts: total=5120,5311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.724 00:16:53.724 Run status group 0 (all jobs): 00:16:53.724 READ: bw=74.8MiB/s (78.4MB/s), 11.9MiB/s-26.1MiB/s (12.5MB/s-27.4MB/s), io=78.2MiB (82.0MB), 
run=1004-1045msec 00:16:53.724 WRITE: bw=79.2MiB/s (83.0MB/s), 13.8MiB/s-27.8MiB/s (14.5MB/s-29.2MB/s), io=82.7MiB (86.8MB), run=1004-1045msec 00:16:53.724 00:16:53.724 Disk stats (read/write): 00:16:53.724 nvme0n1: ios=4076/4102, merge=0/0, ticks=37974/25755, in_queue=63729, util=98.40% 00:16:53.724 nvme0n2: ios=5671/6045, merge=0/0, ticks=52426/45876, in_queue=98302, util=91.74% 00:16:53.724 nvme0n3: ios=2656/3072, merge=0/0, ticks=33865/68140, in_queue=102005, util=88.50% 00:16:53.724 nvme0n4: ios=3835/4096, merge=0/0, ticks=35027/40582, in_queue=75609, util=89.54% 00:16:53.725 12:12:54 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:53.725 [global] 00:16:53.725 thread=1 00:16:53.725 invalidate=1 00:16:53.725 rw=randwrite 00:16:53.725 time_based=1 00:16:53.725 runtime=1 00:16:53.725 ioengine=libaio 00:16:53.725 direct=1 00:16:53.725 bs=4096 00:16:53.725 iodepth=128 00:16:53.725 norandommap=0 00:16:53.725 numjobs=1 00:16:53.725 00:16:53.725 verify_dump=1 00:16:53.725 verify_backlog=512 00:16:53.725 verify_state_save=0 00:16:53.725 do_verify=1 00:16:53.725 verify=crc32c-intel 00:16:53.725 [job0] 00:16:53.725 filename=/dev/nvme0n1 00:16:53.725 [job1] 00:16:53.725 filename=/dev/nvme0n2 00:16:53.725 [job2] 00:16:53.725 filename=/dev/nvme0n3 00:16:53.725 [job3] 00:16:53.725 filename=/dev/nvme0n4 00:16:53.725 Could not set queue depth (nvme0n1) 00:16:53.725 Could not set queue depth (nvme0n2) 00:16:53.725 Could not set queue depth (nvme0n3) 00:16:53.725 Could not set queue depth (nvme0n4) 00:16:54.052 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.052 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.052 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.052 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.052 fio-3.35 00:16:54.052 Starting 4 threads 00:16:55.491 00:16:55.491 job0: (groupid=0, jobs=1): err= 0: pid=3400044: Fri Apr 26 12:12:56 2024 00:16:55.491 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:16:55.491 slat (nsec): min=898, max=18323k, avg=79341.93, stdev=663978.33 00:16:55.491 clat (usec): min=5013, max=44707, avg=10458.18, stdev=5003.01 00:16:55.491 lat (usec): min=5016, max=44731, avg=10537.52, stdev=5059.53 00:16:55.491 clat percentiles (usec): 00:16:55.491 | 1.00th=[ 5473], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 7242], 00:16:55.491 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 9634], 00:16:55.491 | 70.00th=[11076], 80.00th=[12911], 90.00th=[19006], 95.00th=[21365], 00:16:55.491 | 99.00th=[28181], 99.50th=[28181], 99.90th=[31589], 99.95th=[32375], 00:16:55.491 | 99.99th=[44827] 00:16:55.491 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:16:55.491 slat (nsec): min=1514, max=15536k, avg=71670.92, stdev=508334.30 00:16:55.491 clat (usec): min=499, max=51411, avg=10305.71, stdev=7385.73 00:16:55.491 lat (usec): min=540, max=51418, avg=10377.39, stdev=7437.24 00:16:55.491 clat percentiles (usec): 00:16:55.491 | 1.00th=[ 1958], 5.00th=[ 3720], 10.00th=[ 5800], 20.00th=[ 6980], 00:16:55.491 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7504], 00:16:55.491 | 70.00th=[ 9634], 80.00th=[11207], 90.00th=[19792], 95.00th=[23725], 
00:16:55.491 | 99.00th=[44303], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:16:55.491 | 99.99th=[51643] 00:16:55.491 bw ( KiB/s): min=20480, max=28672, per=27.07%, avg=24576.00, stdev=5792.62, samples=2 00:16:55.491 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:16:55.491 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:16:55.491 lat (msec) : 2=0.53%, 4=2.36%, 10=65.48%, 20=24.29%, 50=7.06% 00:16:55.491 lat (msec) : 100=0.25% 00:16:55.491 cpu : usr=3.68%, sys=4.47%, ctx=697, majf=0, minf=1 00:16:55.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:55.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.491 issued rwts: total=6144,6150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.491 job1: (groupid=0, jobs=1): err= 0: pid=3400045: Fri Apr 26 12:12:56 2024 00:16:55.491 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:16:55.491 slat (nsec): min=904, max=18681k, avg=131422.07, stdev=1019550.70 00:16:55.491 clat (usec): min=3423, max=99757, avg=14608.11, stdev=11499.28 00:16:55.491 lat (usec): min=3428, max=99765, avg=14739.53, stdev=11608.15 00:16:55.491 clat percentiles (msec): 00:16:55.491 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 10], 00:16:55.491 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:16:55.491 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 22], 95.00th=[ 25], 00:16:55.491 | 99.00th=[ 82], 99.50th=[ 95], 99.90th=[ 101], 99.95th=[ 101], 00:16:55.491 | 99.99th=[ 101] 00:16:55.491 write: IOPS=4480, BW=17.5MiB/s (18.4MB/s)(17.7MiB/1012msec); 0 zone resets 00:16:55.491 slat (nsec): min=1616, max=17796k, avg=97473.96, stdev=621572.00 00:16:55.491 clat (usec): min=1135, max=99725, avg=15084.40, stdev=11105.43 00:16:55.491 lat (usec): min=1144, max=99728, avg=15181.87, stdev=11159.94 00:16:55.492 clat percentiles (msec): 00:16:55.492 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 10], 00:16:55.492 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:16:55.492 | 70.00th=[ 19], 80.00th=[ 20], 90.00th=[ 22], 95.00th=[ 43], 00:16:55.492 | 99.00th=[ 54], 99.50th=[ 71], 99.90th=[ 90], 99.95th=[ 90], 00:16:55.492 | 99.99th=[ 101] 00:16:55.492 bw ( KiB/s): min=13960, max=21296, per=19.41%, avg=17628.00, stdev=5187.34, samples=2 00:16:55.492 iops : min= 3490, max= 5324, avg=4407.00, stdev=1296.83, samples=2 00:16:55.492 lat (msec) : 2=0.02%, 4=1.43%, 10=31.26%, 20=55.62%, 50=8.69% 00:16:55.492 lat (msec) : 100=2.98% 00:16:55.492 cpu : usr=2.37%, sys=4.25%, ctx=580, majf=0, minf=1 00:16:55.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:55.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.492 issued rwts: total=4096,4534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.492 job2: (groupid=0, jobs=1): err= 0: pid=3400048: Fri Apr 26 12:12:56 2024 00:16:55.492 read: IOPS=5848, BW=22.8MiB/s (24.0MB/s)(22.9MiB/1003msec) 00:16:55.492 slat (nsec): min=942, max=19429k, avg=92535.82, stdev=698273.31 00:16:55.492 clat (usec): min=2072, max=44097, avg=11231.24, stdev=5569.13 00:16:55.492 lat (usec): min=3505, max=44121, avg=11323.77, stdev=5637.57 00:16:55.492 clat percentiles (usec): 
00:16:55.492 | 1.00th=[ 4948], 5.00th=[ 6915], 10.00th=[ 7701], 20.00th=[ 8160], 00:16:55.492 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[10552], 00:16:55.492 | 70.00th=[11600], 80.00th=[12387], 90.00th=[15795], 95.00th=[26084], 00:16:55.492 | 99.00th=[32900], 99.50th=[35390], 99.90th=[43779], 99.95th=[43779], 00:16:55.492 | 99.99th=[44303] 00:16:55.492 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:16:55.492 slat (nsec): min=1541, max=16072k, avg=67562.63, stdev=423445.63 00:16:55.492 clat (usec): min=1157, max=45300, avg=9794.64, stdev=4991.02 00:16:55.492 lat (usec): min=1170, max=45306, avg=9862.21, stdev=5022.50 00:16:55.492 clat percentiles (usec): 00:16:55.492 | 1.00th=[ 3949], 5.00th=[ 5342], 10.00th=[ 6390], 20.00th=[ 7570], 00:16:55.492 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8717], 00:16:55.492 | 70.00th=[10028], 80.00th=[10945], 90.00th=[15533], 95.00th=[16581], 00:16:55.492 | 99.00th=[39584], 99.50th=[43254], 99.90th=[44303], 99.95th=[45351], 00:16:55.492 | 99.99th=[45351] 00:16:55.492 bw ( KiB/s): min=20480, max=28672, per=27.07%, avg=24576.00, stdev=5792.62, samples=2 00:16:55.492 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:16:55.492 lat (msec) : 2=0.11%, 4=0.59%, 10=62.28%, 20=32.48%, 50=4.54% 00:16:55.492 cpu : usr=3.69%, sys=5.49%, ctx=615, majf=0, minf=1 00:16:55.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:55.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.492 issued rwts: total=5866,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.492 job3: (groupid=0, jobs=1): err= 0: pid=3400049: Fri Apr 26 12:12:56 2024 00:16:55.492 read: IOPS=5712, BW=22.3MiB/s (23.4MB/s)(22.4MiB/1004msec) 00:16:55.492 slat (nsec): min=967, max=10996k, avg=84998.71, stdev=614220.26 00:16:55.492 clat (usec): min=2626, max=34340, avg=10031.39, stdev=3841.20 00:16:55.492 lat (usec): min=3115, max=34345, avg=10116.38, stdev=3898.24 00:16:55.492 clat percentiles (usec): 00:16:55.492 | 1.00th=[ 4948], 5.00th=[ 6915], 10.00th=[ 7832], 20.00th=[ 8029], 00:16:55.492 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9241], 00:16:55.492 | 70.00th=[ 9765], 80.00th=[11731], 90.00th=[13829], 95.00th=[17695], 00:16:55.492 | 99.00th=[26608], 99.50th=[29754], 99.90th=[33162], 99.95th=[34341], 00:16:55.492 | 99.99th=[34341] 00:16:55.492 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:16:55.492 slat (nsec): min=1647, max=7707.5k, avg=78560.29, stdev=431211.61 00:16:55.492 clat (usec): min=1156, max=34343, avg=11340.34, stdev=6936.02 00:16:55.492 lat (usec): min=1165, max=34353, avg=11418.90, stdev=6988.65 00:16:55.492 clat percentiles (usec): 00:16:55.492 | 1.00th=[ 2999], 5.00th=[ 5080], 10.00th=[ 6325], 20.00th=[ 7373], 00:16:55.492 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8291], 60.00th=[ 8717], 00:16:55.492 | 70.00th=[ 9765], 80.00th=[16188], 90.00th=[24249], 95.00th=[27657], 00:16:55.492 | 99.00th=[31589], 99.50th=[31589], 99.90th=[32113], 99.95th=[33162], 00:16:55.492 | 99.99th=[34341] 00:16:55.492 bw ( KiB/s): min=20592, max=28368, per=26.96%, avg=24480.00, stdev=5498.46, samples=2 00:16:55.492 iops : min= 5148, max= 7092, avg=6120.00, stdev=1374.62, samples=2 00:16:55.492 lat (msec) : 2=0.05%, 4=1.61%, 10=70.28%, 20=18.92%, 50=9.15% 
00:16:55.492 cpu : usr=4.49%, sys=5.68%, ctx=623, majf=0, minf=1 00:16:55.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:55.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.492 issued rwts: total=5735,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.492 00:16:55.492 Run status group 0 (all jobs): 00:16:55.492 READ: bw=84.3MiB/s (88.4MB/s), 15.8MiB/s-23.8MiB/s (16.6MB/s-25.0MB/s), io=85.3MiB (89.5MB), run=1003-1012msec 00:16:55.492 WRITE: bw=88.7MiB/s (93.0MB/s), 17.5MiB/s-23.9MiB/s (18.4MB/s-25.1MB/s), io=89.7MiB (94.1MB), run=1003-1012msec 00:16:55.492 00:16:55.492 Disk stats (read/write): 00:16:55.492 nvme0n1: ios=5273/5632, merge=0/0, ticks=40136/37190, in_queue=77326, util=85.27% 00:16:55.492 nvme0n2: ios=3628/3975, merge=0/0, ticks=50686/50988, in_queue=101674, util=90.22% 00:16:55.492 nvme0n3: ios=4658/5071, merge=0/0, ticks=32280/29627, in_queue=61907, util=94.84% 00:16:55.492 nvme0n4: ios=4655/4650, merge=0/0, ticks=45823/57154, in_queue=102977, util=97.23% 00:16:55.492 12:12:56 -- target/fio.sh@55 -- # sync 00:16:55.492 12:12:56 -- target/fio.sh@59 -- # fio_pid=3400381 00:16:55.492 12:12:56 -- target/fio.sh@61 -- # sleep 3 00:16:55.492 12:12:56 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:55.492 [global] 00:16:55.492 thread=1 00:16:55.492 invalidate=1 00:16:55.492 rw=read 00:16:55.492 time_based=1 00:16:55.492 runtime=10 00:16:55.492 ioengine=libaio 00:16:55.492 direct=1 00:16:55.492 bs=4096 00:16:55.492 iodepth=1 00:16:55.492 norandommap=1 00:16:55.492 numjobs=1 00:16:55.492 00:16:55.492 [job0] 00:16:55.492 filename=/dev/nvme0n1 00:16:55.492 [job1] 00:16:55.492 filename=/dev/nvme0n2 00:16:55.492 [job2] 00:16:55.492 filename=/dev/nvme0n3 00:16:55.492 [job3] 00:16:55.492 filename=/dev/nvme0n4 00:16:55.492 Could not set queue depth (nvme0n1) 00:16:55.492 Could not set queue depth (nvme0n2) 00:16:55.492 Could not set queue depth (nvme0n3) 00:16:55.492 Could not set queue depth (nvme0n4) 00:16:55.753 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.753 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.753 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.753 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.753 fio-3.35 00:16:55.753 Starting 4 threads 00:16:58.296 12:12:59 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:58.557 12:12:59 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:58.557 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=253952, buflen=4096 00:16:58.557 fio: pid=3400572, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:58.557 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=270336, buflen=4096 00:16:58.557 fio: pid=3400571, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:58.557 12:12:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:16:58.557 12:12:59 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:58.818 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=9031680, buflen=4096 00:16:58.818 fio: pid=3400569, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:58.818 12:12:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:58.818 12:12:59 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:58.818 12:13:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:58.818 12:13:00 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:58.818 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=6615040, buflen=4096 00:16:58.818 fio: pid=3400570, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:59.079 00:16:59.079 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3400569: Fri Apr 26 12:13:00 2024 00:16:59.079 read: IOPS=760, BW=3042KiB/s (3115kB/s)(8820KiB/2899msec) 00:16:59.079 slat (usec): min=6, max=34899, avg=53.84, stdev=829.62 00:16:59.079 clat (usec): min=313, max=43019, avg=1243.17, stdev=3713.90 00:16:59.079 lat (usec): min=339, max=43044, avg=1297.02, stdev=3804.10 00:16:59.079 clat percentiles (usec): 00:16:59.079 | 1.00th=[ 490], 5.00th=[ 627], 10.00th=[ 717], 20.00th=[ 807], 00:16:59.079 | 30.00th=[ 865], 40.00th=[ 914], 50.00th=[ 947], 60.00th=[ 971], 00:16:59.079 | 70.00th=[ 979], 80.00th=[ 996], 90.00th=[ 1020], 95.00th=[ 1045], 00:16:59.079 | 99.00th=[ 1975], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:16:59.079 | 99.99th=[43254] 00:16:59.079 bw ( KiB/s): min= 96, max= 4608, per=59.19%, avg=3025.60, stdev=1832.30, samples=5 00:16:59.079 iops : min= 24, max= 1152, avg=756.40, stdev=458.08, samples=5 00:16:59.079 lat (usec) : 500=1.18%, 750=11.11%, 1000=68.68% 00:16:59.079 lat (msec) : 2=18.00%, 4=0.14%, 10=0.05%, 50=0.82% 00:16:59.079 cpu : usr=1.41%, sys=2.83%, ctx=2210, majf=0, minf=1 00:16:59.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.080 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.080 issued rwts: total=2206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.080 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3400570: Fri Apr 26 12:13:00 2024 00:16:59.080 read: IOPS=522, BW=2091KiB/s (2141kB/s)(6460KiB/3090msec) 00:16:59.080 slat (usec): min=6, max=5615, avg=28.84, stdev=139.20 00:16:59.080 clat (usec): min=307, max=42907, avg=1864.99, stdev=6395.72 00:16:59.080 lat (usec): min=315, max=46948, avg=1893.84, stdev=6418.75 00:16:59.080 clat percentiles (usec): 00:16:59.080 | 1.00th=[ 441], 5.00th=[ 570], 10.00th=[ 619], 20.00th=[ 709], 00:16:59.080 | 30.00th=[ 775], 40.00th=[ 816], 50.00th=[ 857], 60.00th=[ 906], 00:16:59.080 | 70.00th=[ 963], 80.00th=[ 996], 90.00th=[ 1057], 95.00th=[ 1106], 00:16:59.080 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:16:59.080 | 99.99th=[42730] 00:16:59.080 bw ( KiB/s): min= 96, max= 5184, per=42.09%, avg=2151.00, 
stdev=2156.25, samples=6 00:16:59.080 iops : min= 24, max= 1296, avg=537.67, stdev=539.16, samples=6 00:16:59.080 lat (usec) : 500=2.66%, 750=23.89%, 1000=53.59% 00:16:59.080 lat (msec) : 2=17.26%, 4=0.06%, 50=2.48% 00:16:59.080 cpu : usr=0.87%, sys=1.94%, ctx=1620, majf=0, minf=1 00:16:59.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.080 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.080 issued rwts: total=1616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.080 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3400571: Fri Apr 26 12:13:00 2024 00:16:59.080 read: IOPS=24, BW=96.3KiB/s (98.7kB/s)(264KiB/2740msec) 00:16:59.080 slat (usec): min=25, max=12509, avg=212.22, stdev=1525.10 00:16:59.080 clat (usec): min=958, max=43046, avg=40932.23, stdev=7129.35 00:16:59.080 lat (usec): min=984, max=54901, avg=41147.28, stdev=7330.85 00:16:59.080 clat percentiles (usec): 00:16:59.080 | 1.00th=[ 963], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:16:59.080 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:59.080 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:16:59.080 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:59.080 | 99.99th=[43254] 00:16:59.080 bw ( KiB/s): min= 96, max= 104, per=1.90%, avg=97.60, stdev= 3.58, samples=5 00:16:59.080 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:16:59.080 lat (usec) : 1000=2.99% 00:16:59.080 lat (msec) : 50=95.52% 00:16:59.080 cpu : usr=0.15%, sys=0.00%, ctx=68, majf=0, minf=1 00:16:59.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.080 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.080 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.080 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3400572: Fri Apr 26 12:13:00 2024 00:16:59.080 read: IOPS=24, BW=95.8KiB/s (98.1kB/s)(248KiB/2589msec) 00:16:59.080 slat (nsec): min=24888, max=61447, avg=28959.71, stdev=4914.21 00:16:59.080 clat (usec): min=743, max=43006, avg=41385.02, stdev=5264.71 00:16:59.080 lat (usec): min=805, max=43035, avg=41414.04, stdev=5260.52 00:16:59.080 clat percentiles (usec): 00:16:59.080 | 1.00th=[ 742], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:16:59.080 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:59.080 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:16:59.080 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:59.080 | 99.99th=[43254] 00:16:59.080 bw ( KiB/s): min= 96, max= 96, per=1.88%, avg=96.00, stdev= 0.00, samples=5 00:16:59.080 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:16:59.080 lat (usec) : 750=1.59% 00:16:59.080 lat (msec) : 50=96.83% 00:16:59.080 cpu : usr=0.15%, sys=0.00%, ctx=64, majf=0, minf=2 00:16:59.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.080 complete : 0=1.6%, 
4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.080 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.080 00:16:59.080 Run status group 0 (all jobs): 00:16:59.080 READ: bw=5111KiB/s (5233kB/s), 95.8KiB/s-3042KiB/s (98.1kB/s-3115kB/s), io=15.4MiB (16.2MB), run=2589-3090msec 00:16:59.080 00:16:59.080 Disk stats (read/write): 00:16:59.080 nvme0n1: ios=2152/0, merge=0/0, ticks=2581/0, in_queue=2581, util=92.22% 00:16:59.080 nvme0n2: ios=1633/0, merge=0/0, ticks=3169/0, in_queue=3169, util=99.13% 00:16:59.080 nvme0n3: ios=62/0, merge=0/0, ticks=2535/0, in_queue=2535, util=96.01% 00:16:59.080 nvme0n4: ios=62/0, merge=0/0, ticks=2568/0, in_queue=2568, util=96.40% 00:16:59.080 12:13:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:59.080 12:13:00 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:59.341 12:13:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:59.341 12:13:00 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:59.342 12:13:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:59.342 12:13:00 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:59.604 12:13:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:59.604 12:13:00 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:59.865 12:13:00 -- target/fio.sh@69 -- # fio_status=0 00:16:59.866 12:13:00 -- target/fio.sh@70 -- # wait 3400381 00:16:59.866 12:13:00 -- target/fio.sh@70 -- # fio_status=4 00:16:59.866 12:13:00 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:59.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.866 12:13:00 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:59.866 12:13:00 -- common/autotest_common.sh@1205 -- # local i=0 00:16:59.866 12:13:00 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:59.866 12:13:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.866 12:13:00 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:59.866 12:13:00 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.866 12:13:00 -- common/autotest_common.sh@1217 -- # return 0 00:16:59.866 12:13:00 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:59.866 12:13:00 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:59.866 nvmf hotplug test: fio failed as expected 00:16:59.866 12:13:00 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.126 12:13:01 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:00.126 12:13:01 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:00.126 12:13:01 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:00.126 12:13:01 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:00.126 12:13:01 -- target/fio.sh@91 -- # nvmftestfini 00:17:00.126 12:13:01 -- nvmf/common.sh@477 -- # nvmfcleanup 
00:17:00.126 12:13:01 -- nvmf/common.sh@117 -- # sync 00:17:00.126 12:13:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:00.126 12:13:01 -- nvmf/common.sh@120 -- # set +e 00:17:00.126 12:13:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:00.126 12:13:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:00.126 rmmod nvme_tcp 00:17:00.126 rmmod nvme_fabrics 00:17:00.126 rmmod nvme_keyring 00:17:00.126 12:13:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.126 12:13:01 -- nvmf/common.sh@124 -- # set -e 00:17:00.126 12:13:01 -- nvmf/common.sh@125 -- # return 0 00:17:00.126 12:13:01 -- nvmf/common.sh@478 -- # '[' -n 3396869 ']' 00:17:00.126 12:13:01 -- nvmf/common.sh@479 -- # killprocess 3396869 00:17:00.126 12:13:01 -- common/autotest_common.sh@936 -- # '[' -z 3396869 ']' 00:17:00.126 12:13:01 -- common/autotest_common.sh@940 -- # kill -0 3396869 00:17:00.126 12:13:01 -- common/autotest_common.sh@941 -- # uname 00:17:00.126 12:13:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:00.126 12:13:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3396869 00:17:00.126 12:13:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:00.126 12:13:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:00.126 12:13:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3396869' 00:17:00.126 killing process with pid 3396869 00:17:00.126 12:13:01 -- common/autotest_common.sh@955 -- # kill 3396869 00:17:00.126 12:13:01 -- common/autotest_common.sh@960 -- # wait 3396869 00:17:00.387 12:13:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:00.387 12:13:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:00.387 12:13:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:00.387 12:13:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:00.387 12:13:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:00.387 12:13:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.387 12:13:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.387 12:13:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.307 12:13:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:02.307 00:17:02.307 real 0m28.411s 00:17:02.307 user 2m24.843s 00:17:02.307 sys 0m8.982s 00:17:02.307 12:13:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:02.307 12:13:03 -- common/autotest_common.sh@10 -- # set +x 00:17:02.307 ************************************ 00:17:02.307 END TEST nvmf_fio_target 00:17:02.307 ************************************ 00:17:02.569 12:13:03 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:02.569 12:13:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:02.569 12:13:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:02.569 12:13:03 -- common/autotest_common.sh@10 -- # set +x 00:17:02.569 ************************************ 00:17:02.569 START TEST nvmf_bdevio 00:17:02.569 ************************************ 00:17:02.569 12:13:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:02.831 * Looking for test storage... 
00:17:02.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.831 12:13:03 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.831 12:13:03 -- nvmf/common.sh@7 -- # uname -s 00:17:02.831 12:13:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.831 12:13:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.831 12:13:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.831 12:13:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.831 12:13:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.831 12:13:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.831 12:13:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.831 12:13:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.831 12:13:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.832 12:13:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.832 12:13:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:02.832 12:13:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:02.832 12:13:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.832 12:13:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.832 12:13:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.832 12:13:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.832 12:13:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.832 12:13:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.832 12:13:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.832 12:13:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.832 12:13:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.832 12:13:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.832 12:13:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.832 12:13:03 -- paths/export.sh@5 -- # export PATH 00:17:02.832 12:13:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.832 12:13:03 -- nvmf/common.sh@47 -- # : 0 00:17:02.832 12:13:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.832 12:13:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.832 12:13:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.832 12:13:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.832 12:13:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.832 12:13:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.832 12:13:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.832 12:13:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.832 12:13:03 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:02.832 12:13:03 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:02.832 12:13:03 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:02.832 12:13:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:02.832 12:13:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.832 12:13:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:02.832 12:13:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:02.832 12:13:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:02.832 12:13:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.832 12:13:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.832 12:13:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.832 12:13:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:02.832 12:13:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:02.832 12:13:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:02.832 12:13:03 -- common/autotest_common.sh@10 -- # set +x 00:17:10.977 12:13:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:10.977 12:13:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:10.977 12:13:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:10.977 12:13:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:10.977 12:13:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:10.977 12:13:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:10.977 12:13:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:10.977 12:13:10 -- nvmf/common.sh@295 -- # net_devs=() 00:17:10.977 12:13:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:10.977 12:13:10 -- nvmf/common.sh@296 
-- # e810=() 00:17:10.977 12:13:10 -- nvmf/common.sh@296 -- # local -ga e810 00:17:10.977 12:13:10 -- nvmf/common.sh@297 -- # x722=() 00:17:10.977 12:13:10 -- nvmf/common.sh@297 -- # local -ga x722 00:17:10.977 12:13:10 -- nvmf/common.sh@298 -- # mlx=() 00:17:10.977 12:13:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:10.977 12:13:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.977 12:13:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.977 12:13:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.977 12:13:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.977 12:13:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.977 12:13:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.977 12:13:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.977 12:13:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.977 12:13:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.977 12:13:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.977 12:13:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.977 12:13:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:10.977 12:13:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:10.977 12:13:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:10.977 12:13:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.977 12:13:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:10.977 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:10.977 12:13:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.977 12:13:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:10.977 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:10.977 12:13:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:10.977 12:13:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.977 12:13:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.977 12:13:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:10.977 12:13:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.977 12:13:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:10.977 Found 
net devices under 0000:31:00.0: cvl_0_0 00:17:10.977 12:13:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.977 12:13:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.977 12:13:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.977 12:13:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:10.977 12:13:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.977 12:13:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:10.977 Found net devices under 0000:31:00.1: cvl_0_1 00:17:10.977 12:13:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.977 12:13:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:10.977 12:13:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:10.977 12:13:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:10.977 12:13:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:10.977 12:13:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.977 12:13:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.977 12:13:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.977 12:13:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:10.977 12:13:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.977 12:13:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.977 12:13:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:10.977 12:13:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.977 12:13:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.977 12:13:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:10.977 12:13:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:10.978 12:13:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.978 12:13:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.978 12:13:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.978 12:13:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.978 12:13:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.978 12:13:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.978 12:13:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.978 12:13:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.978 12:13:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:17:10.978 00:17:10.978 --- 10.0.0.2 ping statistics --- 00:17:10.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.978 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:17:10.978 12:13:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:10.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:17:10.978 00:17:10.978 --- 10.0.0.1 ping statistics --- 00:17:10.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.978 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:17:10.978 12:13:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.978 12:13:11 -- nvmf/common.sh@411 -- # return 0 00:17:10.978 12:13:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:10.978 12:13:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.978 12:13:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:10.978 12:13:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:10.978 12:13:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.978 12:13:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:10.978 12:13:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:10.978 12:13:11 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:10.978 12:13:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:10.978 12:13:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:10.978 12:13:11 -- common/autotest_common.sh@10 -- # set +x 00:17:10.978 12:13:11 -- nvmf/common.sh@470 -- # nvmfpid=3405661 00:17:10.978 12:13:11 -- nvmf/common.sh@471 -- # waitforlisten 3405661 00:17:10.978 12:13:11 -- common/autotest_common.sh@817 -- # '[' -z 3405661 ']' 00:17:10.978 12:13:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:10.978 12:13:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.978 12:13:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:10.978 12:13:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.978 12:13:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:10.978 12:13:11 -- common/autotest_common.sh@10 -- # set +x 00:17:10.978 [2024-04-26 12:13:11.107890] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:10.978 [2024-04-26 12:13:11.107957] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.978 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.978 [2024-04-26 12:13:11.199532] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:10.978 [2024-04-26 12:13:11.290943] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.978 [2024-04-26 12:13:11.291001] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.978 [2024-04-26 12:13:11.291009] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.978 [2024-04-26 12:13:11.291016] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.978 [2024-04-26 12:13:11.291022] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:10.978 [2024-04-26 12:13:11.291220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:10.978 [2024-04-26 12:13:11.291530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:10.978 [2024-04-26 12:13:11.291591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.978 [2024-04-26 12:13:11.291592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:10.978 12:13:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:10.978 12:13:11 -- common/autotest_common.sh@850 -- # return 0 00:17:10.978 12:13:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:10.978 12:13:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:10.978 12:13:11 -- common/autotest_common.sh@10 -- # set +x 00:17:10.978 12:13:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.978 12:13:11 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.978 12:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.978 12:13:11 -- common/autotest_common.sh@10 -- # set +x 00:17:10.978 [2024-04-26 12:13:11.958183] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.978 12:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.978 12:13:11 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:10.978 12:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.978 12:13:11 -- common/autotest_common.sh@10 -- # set +x 00:17:10.978 Malloc0 00:17:10.978 12:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.978 12:13:11 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:10.978 12:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.978 12:13:11 -- common/autotest_common.sh@10 -- # set +x 00:17:10.978 12:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.978 12:13:12 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:10.978 12:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.978 12:13:12 -- common/autotest_common.sh@10 -- # set +x 00:17:10.978 12:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.978 12:13:12 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.978 12:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.978 12:13:12 -- common/autotest_common.sh@10 -- # set +x 00:17:10.978 [2024-04-26 12:13:12.023022] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.978 12:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.978 12:13:12 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:10.978 12:13:12 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:10.978 12:13:12 -- nvmf/common.sh@521 -- # config=() 00:17:10.978 12:13:12 -- nvmf/common.sh@521 -- # local subsystem config 00:17:10.978 12:13:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:10.978 12:13:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:10.978 { 00:17:10.978 "params": { 00:17:10.978 "name": "Nvme$subsystem", 00:17:10.978 "trtype": "$TEST_TRANSPORT", 00:17:10.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.978 "adrfam": "ipv4", 00:17:10.978 "trsvcid": 
"$NVMF_PORT", 00:17:10.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.978 "hdgst": ${hdgst:-false}, 00:17:10.978 "ddgst": ${ddgst:-false} 00:17:10.978 }, 00:17:10.978 "method": "bdev_nvme_attach_controller" 00:17:10.978 } 00:17:10.978 EOF 00:17:10.978 )") 00:17:10.978 12:13:12 -- nvmf/common.sh@543 -- # cat 00:17:10.978 12:13:12 -- nvmf/common.sh@545 -- # jq . 00:17:10.978 12:13:12 -- nvmf/common.sh@546 -- # IFS=, 00:17:10.978 12:13:12 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:10.978 "params": { 00:17:10.978 "name": "Nvme1", 00:17:10.978 "trtype": "tcp", 00:17:10.978 "traddr": "10.0.0.2", 00:17:10.978 "adrfam": "ipv4", 00:17:10.978 "trsvcid": "4420", 00:17:10.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.978 "hdgst": false, 00:17:10.978 "ddgst": false 00:17:10.978 }, 00:17:10.978 "method": "bdev_nvme_attach_controller" 00:17:10.978 }' 00:17:10.978 [2024-04-26 12:13:12.077014] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:10.978 [2024-04-26 12:13:12.077079] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406009 ] 00:17:10.978 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.978 [2024-04-26 12:13:12.144280] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:11.237 [2024-04-26 12:13:12.217214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.237 [2024-04-26 12:13:12.217334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.237 [2024-04-26 12:13:12.217337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.498 I/O targets: 00:17:11.498 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:11.498 00:17:11.498 00:17:11.498 CUnit - A unit testing framework for C - Version 2.1-3 00:17:11.498 http://cunit.sourceforge.net/ 00:17:11.498 00:17:11.498 00:17:11.498 Suite: bdevio tests on: Nvme1n1 00:17:11.498 Test: blockdev write read block ...passed 00:17:11.498 Test: blockdev write zeroes read block ...passed 00:17:11.498 Test: blockdev write zeroes read no split ...passed 00:17:11.498 Test: blockdev write zeroes read split ...passed 00:17:11.498 Test: blockdev write zeroes read split partial ...passed 00:17:11.498 Test: blockdev reset ...[2024-04-26 12:13:12.684069] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:11.498 [2024-04-26 12:13:12.684130] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd868f0 (9): Bad file descriptor 00:17:11.758 [2024-04-26 12:13:12.793560] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:11.758 passed 00:17:11.758 Test: blockdev write read 8 blocks ...passed 00:17:11.758 Test: blockdev write read size > 128k ...passed 00:17:11.758 Test: blockdev write read invalid size ...passed 00:17:11.758 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:11.758 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:11.758 Test: blockdev write read max offset ...passed 00:17:11.758 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:11.758 Test: blockdev writev readv 8 blocks ...passed 00:17:12.020 Test: blockdev writev readv 30 x 1block ...passed 00:17:12.020 Test: blockdev writev readv block ...passed 00:17:12.020 Test: blockdev writev readv size > 128k ...passed 00:17:12.020 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:12.020 Test: blockdev comparev and writev ...[2024-04-26 12:13:13.058636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.020 [2024-04-26 12:13:13.058660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.020 [2024-04-26 12:13:13.058671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.021 [2024-04-26 12:13:13.058677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.021 [2024-04-26 12:13:13.059150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.021 [2024-04-26 12:13:13.059159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:12.021 [2024-04-26 12:13:13.059169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.021 [2024-04-26 12:13:13.059175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:12.021 [2024-04-26 12:13:13.059654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.021 [2024-04-26 12:13:13.059661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:12.021 [2024-04-26 12:13:13.059671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.021 [2024-04-26 12:13:13.059676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:12.021 [2024-04-26 12:13:13.060174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.021 [2024-04-26 12:13:13.060183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:12.021 [2024-04-26 12:13:13.060193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.021 [2024-04-26 12:13:13.060198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:12.021 passed 00:17:12.021 Test: blockdev nvme passthru rw ...passed 00:17:12.021 Test: blockdev nvme passthru vendor specific ...[2024-04-26 12:13:13.144718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:12.021 [2024-04-26 12:13:13.144728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:12.021 [2024-04-26 12:13:13.145101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:12.021 [2024-04-26 12:13:13.145112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:12.021 [2024-04-26 12:13:13.145473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:12.021 [2024-04-26 12:13:13.145480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:12.021 [2024-04-26 12:13:13.145821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:12.021 [2024-04-26 12:13:13.145829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:12.021 passed 00:17:12.021 Test: blockdev nvme admin passthru ...passed 00:17:12.021 Test: blockdev copy ...passed 00:17:12.021 00:17:12.021 Run Summary: Type Total Ran Passed Failed Inactive 00:17:12.021 suites 1 1 n/a 0 0 00:17:12.021 tests 23 23 23 0 0 00:17:12.021 asserts 152 152 152 0 n/a 00:17:12.021 00:17:12.021 Elapsed time = 1.377 seconds 00:17:12.282 12:13:13 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.282 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:12.282 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:17:12.282 12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:12.282 12:13:13 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:12.282 12:13:13 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:12.282 12:13:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:12.282 12:13:13 -- nvmf/common.sh@117 -- # sync 00:17:12.282 12:13:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:12.282 12:13:13 -- nvmf/common.sh@120 -- # set +e 00:17:12.282 12:13:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:12.282 12:13:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:12.282 rmmod nvme_tcp 00:17:12.282 rmmod nvme_fabrics 00:17:12.282 rmmod nvme_keyring 00:17:12.282 12:13:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:12.282 12:13:13 -- nvmf/common.sh@124 -- # set -e 00:17:12.282 12:13:13 -- nvmf/common.sh@125 -- # return 0 00:17:12.282 12:13:13 -- nvmf/common.sh@478 -- # '[' -n 3405661 ']' 00:17:12.282 12:13:13 -- nvmf/common.sh@479 -- # killprocess 3405661 00:17:12.282 12:13:13 -- common/autotest_common.sh@936 -- # '[' -z 3405661 ']' 00:17:12.282 12:13:13 -- common/autotest_common.sh@940 -- # kill -0 3405661 00:17:12.282 12:13:13 -- common/autotest_common.sh@941 -- # uname 00:17:12.282 12:13:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:12.282 12:13:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3405661 00:17:12.282 12:13:13 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:12.282 12:13:13 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:12.282 12:13:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3405661' 00:17:12.282 killing process with pid 3405661 00:17:12.282 12:13:13 -- common/autotest_common.sh@955 -- # kill 3405661 00:17:12.282 12:13:13 -- common/autotest_common.sh@960 -- # wait 3405661 00:17:12.541 12:13:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:12.541 12:13:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:12.541 12:13:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:12.541 12:13:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:12.541 12:13:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:12.541 12:13:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.541 12:13:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.541 12:13:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.081 12:13:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:15.081 00:17:15.081 real 0m12.037s 00:17:15.081 user 0m14.186s 00:17:15.081 sys 0m5.802s 00:17:15.081 12:13:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:15.081 12:13:15 -- common/autotest_common.sh@10 -- # set +x 00:17:15.081 ************************************ 00:17:15.081 END TEST nvmf_bdevio 00:17:15.081 ************************************ 00:17:15.081 12:13:15 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:17:15.081 12:13:15 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:15.081 12:13:15 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:15.081 12:13:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:15.081 12:13:15 -- common/autotest_common.sh@10 -- # set +x 00:17:15.081 ************************************ 00:17:15.081 START TEST nvmf_bdevio_no_huge 00:17:15.081 ************************************ 00:17:15.081 12:13:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:15.081 * Looking for test storage... 
00:17:15.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.081 12:13:16 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.081 12:13:16 -- nvmf/common.sh@7 -- # uname -s 00:17:15.081 12:13:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.081 12:13:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.081 12:13:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.081 12:13:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.081 12:13:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.081 12:13:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.081 12:13:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.081 12:13:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.081 12:13:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.081 12:13:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.081 12:13:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:15.081 12:13:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:15.082 12:13:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.082 12:13:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.082 12:13:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.082 12:13:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.082 12:13:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.082 12:13:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.082 12:13:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.082 12:13:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.082 12:13:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.082 12:13:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.082 12:13:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.082 12:13:16 -- paths/export.sh@5 -- # export PATH 00:17:15.082 12:13:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.082 12:13:16 -- nvmf/common.sh@47 -- # : 0 00:17:15.082 12:13:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.082 12:13:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.082 12:13:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.082 12:13:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.082 12:13:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.082 12:13:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.082 12:13:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:15.082 12:13:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.082 12:13:16 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:15.082 12:13:16 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:15.082 12:13:16 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:15.082 12:13:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:15.082 12:13:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.082 12:13:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:15.082 12:13:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:15.082 12:13:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:15.082 12:13:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.082 12:13:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.082 12:13:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.082 12:13:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:15.082 12:13:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:15.082 12:13:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:15.082 12:13:16 -- common/autotest_common.sh@10 -- # set +x 00:17:21.673 12:13:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:21.673 12:13:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:21.673 12:13:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:21.673 12:13:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:21.673 12:13:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:21.673 12:13:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:21.673 12:13:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:21.673 12:13:22 -- nvmf/common.sh@295 -- # net_devs=() 00:17:21.673 12:13:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:21.673 12:13:22 -- nvmf/common.sh@296 
-- # e810=() 00:17:21.673 12:13:22 -- nvmf/common.sh@296 -- # local -ga e810 00:17:21.673 12:13:22 -- nvmf/common.sh@297 -- # x722=() 00:17:21.673 12:13:22 -- nvmf/common.sh@297 -- # local -ga x722 00:17:21.673 12:13:22 -- nvmf/common.sh@298 -- # mlx=() 00:17:21.673 12:13:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:21.673 12:13:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.673 12:13:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.673 12:13:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.673 12:13:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.673 12:13:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.673 12:13:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.673 12:13:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.673 12:13:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.673 12:13:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.673 12:13:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.673 12:13:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.673 12:13:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:21.673 12:13:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:21.932 12:13:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:21.932 12:13:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.932 12:13:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:21.932 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:21.932 12:13:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.932 12:13:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:21.932 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:21.932 12:13:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:21.932 12:13:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.932 12:13:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.932 12:13:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:21.932 12:13:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.932 12:13:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:21.932 Found 
net devices under 0000:31:00.0: cvl_0_0 00:17:21.932 12:13:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.932 12:13:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.932 12:13:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.932 12:13:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:21.932 12:13:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.932 12:13:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:21.932 Found net devices under 0000:31:00.1: cvl_0_1 00:17:21.932 12:13:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.932 12:13:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:21.932 12:13:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:21.932 12:13:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:21.932 12:13:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:21.932 12:13:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.932 12:13:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.932 12:13:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.932 12:13:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:21.932 12:13:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.932 12:13:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.932 12:13:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:21.932 12:13:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.932 12:13:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.932 12:13:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:21.932 12:13:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:21.932 12:13:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.932 12:13:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.932 12:13:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.932 12:13:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.932 12:13:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:21.932 12:13:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:22.192 12:13:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:22.192 12:13:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:22.192 12:13:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:22.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:17:22.192 00:17:22.192 --- 10.0.0.2 ping statistics --- 00:17:22.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.192 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:17:22.192 12:13:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:22.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:22.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:17:22.192 00:17:22.192 --- 10.0.0.1 ping statistics --- 00:17:22.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.192 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:17:22.192 12:13:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.192 12:13:23 -- nvmf/common.sh@411 -- # return 0 00:17:22.192 12:13:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:22.192 12:13:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.192 12:13:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:22.192 12:13:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:22.192 12:13:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.192 12:13:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:22.192 12:13:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:22.192 12:13:23 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:22.192 12:13:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:22.192 12:13:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:22.192 12:13:23 -- common/autotest_common.sh@10 -- # set +x 00:17:22.192 12:13:23 -- nvmf/common.sh@470 -- # nvmfpid=3410412 00:17:22.192 12:13:23 -- nvmf/common.sh@471 -- # waitforlisten 3410412 00:17:22.192 12:13:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:22.192 12:13:23 -- common/autotest_common.sh@817 -- # '[' -z 3410412 ']' 00:17:22.192 12:13:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.192 12:13:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:22.192 12:13:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.192 12:13:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:22.192 12:13:23 -- common/autotest_common.sh@10 -- # set +x 00:17:22.192 [2024-04-26 12:13:23.324627] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:22.192 [2024-04-26 12:13:23.324700] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:22.452 [2024-04-26 12:13:23.422543] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:22.452 [2024-04-26 12:13:23.523831] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.452 [2024-04-26 12:13:23.523894] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.452 [2024-04-26 12:13:23.523902] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.452 [2024-04-26 12:13:23.523910] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.452 [2024-04-26 12:13:23.523916] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:22.452 [2024-04-26 12:13:23.524116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:22.452 [2024-04-26 12:13:23.524280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:22.452 [2024-04-26 12:13:23.524443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:22.452 [2024-04-26 12:13:23.524443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:23.022 12:13:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:23.022 12:13:24 -- common/autotest_common.sh@850 -- # return 0 00:17:23.022 12:13:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:23.022 12:13:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:23.022 12:13:24 -- common/autotest_common.sh@10 -- # set +x 00:17:23.022 12:13:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.022 12:13:24 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:23.022 12:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.022 12:13:24 -- common/autotest_common.sh@10 -- # set +x 00:17:23.022 [2024-04-26 12:13:24.173222] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.022 12:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.022 12:13:24 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:23.022 12:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.022 12:13:24 -- common/autotest_common.sh@10 -- # set +x 00:17:23.022 Malloc0 00:17:23.022 12:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.022 12:13:24 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:23.022 12:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.022 12:13:24 -- common/autotest_common.sh@10 -- # set +x 00:17:23.022 12:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.022 12:13:24 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:23.022 12:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.022 12:13:24 -- common/autotest_common.sh@10 -- # set +x 00:17:23.022 12:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.022 12:13:24 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.022 12:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.022 12:13:24 -- common/autotest_common.sh@10 -- # set +x 00:17:23.023 [2024-04-26 12:13:24.226615] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.023 12:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.023 12:13:24 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:23.023 12:13:24 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:23.023 12:13:24 -- nvmf/common.sh@521 -- # config=() 00:17:23.023 12:13:24 -- nvmf/common.sh@521 -- # local subsystem config 00:17:23.023 12:13:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:23.023 12:13:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:23.023 { 00:17:23.023 "params": { 00:17:23.023 "name": "Nvme$subsystem", 00:17:23.023 "trtype": "$TEST_TRANSPORT", 00:17:23.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:23.023 "adrfam": "ipv4", 00:17:23.023 
"trsvcid": "$NVMF_PORT", 00:17:23.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:23.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:23.023 "hdgst": ${hdgst:-false}, 00:17:23.023 "ddgst": ${ddgst:-false} 00:17:23.023 }, 00:17:23.023 "method": "bdev_nvme_attach_controller" 00:17:23.023 } 00:17:23.023 EOF 00:17:23.023 )") 00:17:23.023 12:13:24 -- nvmf/common.sh@543 -- # cat 00:17:23.283 12:13:24 -- nvmf/common.sh@545 -- # jq . 00:17:23.283 12:13:24 -- nvmf/common.sh@546 -- # IFS=, 00:17:23.283 12:13:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:23.283 "params": { 00:17:23.283 "name": "Nvme1", 00:17:23.283 "trtype": "tcp", 00:17:23.283 "traddr": "10.0.0.2", 00:17:23.283 "adrfam": "ipv4", 00:17:23.283 "trsvcid": "4420", 00:17:23.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:23.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:23.283 "hdgst": false, 00:17:23.283 "ddgst": false 00:17:23.283 }, 00:17:23.283 "method": "bdev_nvme_attach_controller" 00:17:23.283 }' 00:17:23.283 [2024-04-26 12:13:24.279417] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:23.283 [2024-04-26 12:13:24.279482] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3410760 ] 00:17:23.283 [2024-04-26 12:13:24.349291] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:23.283 [2024-04-26 12:13:24.442712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.283 [2024-04-26 12:13:24.442803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.283 [2024-04-26 12:13:24.442807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.542 I/O targets: 00:17:23.542 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:23.542 00:17:23.542 00:17:23.542 CUnit - A unit testing framework for C - Version 2.1-3 00:17:23.542 http://cunit.sourceforge.net/ 00:17:23.542 00:17:23.542 00:17:23.542 Suite: bdevio tests on: Nvme1n1 00:17:23.542 Test: blockdev write read block ...passed 00:17:23.801 Test: blockdev write zeroes read block ...passed 00:17:23.801 Test: blockdev write zeroes read no split ...passed 00:17:23.801 Test: blockdev write zeroes read split ...passed 00:17:23.801 Test: blockdev write zeroes read split partial ...passed 00:17:23.801 Test: blockdev reset ...[2024-04-26 12:13:24.918053] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:23.801 [2024-04-26 12:13:24.918118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615fa0 (9): Bad file descriptor 00:17:23.801 [2024-04-26 12:13:24.976281] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:23.801 passed 00:17:23.801 Test: blockdev write read 8 blocks ...passed 00:17:24.060 Test: blockdev write read size > 128k ...passed 00:17:24.060 Test: blockdev write read invalid size ...passed 00:17:24.060 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:24.060 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:24.060 Test: blockdev write read max offset ...passed 00:17:24.060 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:24.060 Test: blockdev writev readv 8 blocks ...passed 00:17:24.060 Test: blockdev writev readv 30 x 1block ...passed 00:17:24.060 Test: blockdev writev readv block ...passed 00:17:24.060 Test: blockdev writev readv size > 128k ...passed 00:17:24.321 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:24.321 Test: blockdev comparev and writev ...[2024-04-26 12:13:25.284502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:24.321 [2024-04-26 12:13:25.284526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.321 [2024-04-26 12:13:25.284536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:24.321 [2024-04-26 12:13:25.284542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:24.321 [2024-04-26 12:13:25.285027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:24.321 [2024-04-26 12:13:25.285036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:24.321 [2024-04-26 12:13:25.285045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:24.321 [2024-04-26 12:13:25.285050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:24.321 [2024-04-26 12:13:25.285511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:24.321 [2024-04-26 12:13:25.285519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:24.321 [2024-04-26 12:13:25.285528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:24.321 [2024-04-26 12:13:25.285533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:24.321 [2024-04-26 12:13:25.286060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:24.321 [2024-04-26 12:13:25.286068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:24.321 [2024-04-26 12:13:25.286077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:24.321 [2024-04-26 12:13:25.286082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:24.321 passed 00:17:24.321 Test: blockdev nvme passthru rw ...passed 00:17:24.321 Test: blockdev nvme passthru vendor specific ...[2024-04-26 12:13:25.369727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:24.321 [2024-04-26 12:13:25.369737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:24.321 [2024-04-26 12:13:25.370061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:24.321 [2024-04-26 12:13:25.370069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:24.321 [2024-04-26 12:13:25.370427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:24.321 [2024-04-26 12:13:25.370434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:24.321 [2024-04-26 12:13:25.370781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:24.321 [2024-04-26 12:13:25.370792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:24.321 passed 00:17:24.321 Test: blockdev nvme admin passthru ...passed 00:17:24.321 Test: blockdev copy ...passed 00:17:24.321 00:17:24.321 Run Summary: Type Total Ran Passed Failed Inactive 00:17:24.321 suites 1 1 n/a 0 0 00:17:24.321 tests 23 23 23 0 0 00:17:24.321 asserts 152 152 152 0 n/a 00:17:24.321 00:17:24.321 Elapsed time = 1.457 seconds 00:17:24.582 12:13:25 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.582 12:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.582 12:13:25 -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 12:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.582 12:13:25 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:24.582 12:13:25 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:24.582 12:13:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:24.582 12:13:25 -- nvmf/common.sh@117 -- # sync 00:17:24.582 12:13:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:24.582 12:13:25 -- nvmf/common.sh@120 -- # set +e 00:17:24.582 12:13:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:24.582 12:13:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:24.582 rmmod nvme_tcp 00:17:24.582 rmmod nvme_fabrics 00:17:24.582 rmmod nvme_keyring 00:17:24.582 12:13:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.582 12:13:25 -- nvmf/common.sh@124 -- # set -e 00:17:24.582 12:13:25 -- nvmf/common.sh@125 -- # return 0 00:17:24.582 12:13:25 -- nvmf/common.sh@478 -- # '[' -n 3410412 ']' 00:17:24.582 12:13:25 -- nvmf/common.sh@479 -- # killprocess 3410412 00:17:24.582 12:13:25 -- common/autotest_common.sh@936 -- # '[' -z 3410412 ']' 00:17:24.582 12:13:25 -- common/autotest_common.sh@940 -- # kill -0 3410412 00:17:24.582 12:13:25 -- common/autotest_common.sh@941 -- # uname 00:17:24.582 12:13:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.582 12:13:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3410412 00:17:24.843 12:13:25 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:24.843 12:13:25 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:24.843 12:13:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3410412' 00:17:24.843 killing process with pid 3410412 00:17:24.843 12:13:25 -- common/autotest_common.sh@955 -- # kill 3410412 00:17:24.843 12:13:25 -- common/autotest_common.sh@960 -- # wait 3410412 00:17:24.843 12:13:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:24.843 12:13:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:24.843 12:13:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:24.843 12:13:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.843 12:13:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.843 12:13:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.843 12:13:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.843 12:13:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.388 12:13:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:27.388 00:17:27.388 real 0m12.195s 00:17:27.388 user 0m14.708s 00:17:27.388 sys 0m6.226s 00:17:27.388 12:13:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:27.388 12:13:28 -- common/autotest_common.sh@10 -- # set +x 00:17:27.388 ************************************ 00:17:27.388 END TEST nvmf_bdevio_no_huge 00:17:27.388 ************************************ 00:17:27.388 12:13:28 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:27.388 12:13:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:27.388 12:13:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:27.388 12:13:28 -- common/autotest_common.sh@10 -- # set +x 00:17:27.388 ************************************ 00:17:27.388 START TEST nvmf_tls 00:17:27.388 ************************************ 00:17:27.388 12:13:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:27.388 * Looking for test storage... 
00:17:27.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.388 12:13:28 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.388 12:13:28 -- nvmf/common.sh@7 -- # uname -s 00:17:27.388 12:13:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.388 12:13:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.388 12:13:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.388 12:13:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.388 12:13:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.388 12:13:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.388 12:13:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.388 12:13:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.388 12:13:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.389 12:13:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.389 12:13:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.389 12:13:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.389 12:13:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.389 12:13:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.389 12:13:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.389 12:13:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.389 12:13:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.389 12:13:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.389 12:13:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.389 12:13:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.389 12:13:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.389 12:13:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.389 12:13:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.389 12:13:28 -- paths/export.sh@5 -- # export PATH 00:17:27.389 12:13:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.389 12:13:28 -- nvmf/common.sh@47 -- # : 0 00:17:27.389 12:13:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:27.389 12:13:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:27.389 12:13:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.389 12:13:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.389 12:13:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.389 12:13:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:27.389 12:13:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:27.389 12:13:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:27.389 12:13:28 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.389 12:13:28 -- target/tls.sh@62 -- # nvmftestinit 00:17:27.389 12:13:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:27.389 12:13:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.389 12:13:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:27.389 12:13:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:27.389 12:13:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:27.389 12:13:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.389 12:13:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.389 12:13:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.389 12:13:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:27.389 12:13:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:27.389 12:13:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:27.389 12:13:28 -- common/autotest_common.sh@10 -- # set +x 00:17:35.566 12:13:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:35.566 12:13:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:35.566 12:13:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:35.566 12:13:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:35.566 12:13:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:35.566 12:13:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:35.566 12:13:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:35.566 12:13:35 -- nvmf/common.sh@295 -- # net_devs=() 00:17:35.566 12:13:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:35.566 12:13:35 -- nvmf/common.sh@296 -- # e810=() 00:17:35.566 
12:13:35 -- nvmf/common.sh@296 -- # local -ga e810 00:17:35.566 12:13:35 -- nvmf/common.sh@297 -- # x722=() 00:17:35.566 12:13:35 -- nvmf/common.sh@297 -- # local -ga x722 00:17:35.566 12:13:35 -- nvmf/common.sh@298 -- # mlx=() 00:17:35.566 12:13:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:35.566 12:13:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:35.567 12:13:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:35.567 12:13:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:35.567 12:13:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:35.567 12:13:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:35.567 12:13:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:35.567 12:13:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:35.567 12:13:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:35.567 12:13:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:35.567 12:13:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:35.567 12:13:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:35.567 12:13:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:35.567 12:13:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:35.567 12:13:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:35.567 12:13:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.567 12:13:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:35.567 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:35.567 12:13:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.567 12:13:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:35.567 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:35.567 12:13:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:35.567 12:13:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.567 12:13:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.567 12:13:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:35.567 12:13:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.567 12:13:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:35.567 Found net devices under 
0000:31:00.0: cvl_0_0 00:17:35.567 12:13:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.567 12:13:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.567 12:13:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.567 12:13:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:35.567 12:13:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.567 12:13:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:35.567 Found net devices under 0000:31:00.1: cvl_0_1 00:17:35.567 12:13:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.567 12:13:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:35.567 12:13:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:35.567 12:13:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:35.567 12:13:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.567 12:13:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.567 12:13:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:35.567 12:13:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:35.567 12:13:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:35.567 12:13:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:35.567 12:13:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:35.567 12:13:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:35.567 12:13:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.567 12:13:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:35.567 12:13:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:35.567 12:13:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:35.567 12:13:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:35.567 12:13:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:35.567 12:13:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.567 12:13:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:35.567 12:13:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.567 12:13:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.567 12:13:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.567 12:13:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:35.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:17:35.567 00:17:35.567 --- 10.0.0.2 ping statistics --- 00:17:35.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.567 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:17:35.567 12:13:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:35.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:17:35.567 00:17:35.567 --- 10.0.0.1 ping statistics --- 00:17:35.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.567 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:17:35.567 12:13:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.567 12:13:35 -- nvmf/common.sh@411 -- # return 0 00:17:35.567 12:13:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:35.567 12:13:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.567 12:13:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:35.567 12:13:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.567 12:13:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:35.567 12:13:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:35.567 12:13:35 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:35.567 12:13:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:35.567 12:13:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:35.567 12:13:35 -- common/autotest_common.sh@10 -- # set +x 00:17:35.567 12:13:35 -- nvmf/common.sh@470 -- # nvmfpid=3415275 00:17:35.567 12:13:35 -- nvmf/common.sh@471 -- # waitforlisten 3415275 00:17:35.567 12:13:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:35.567 12:13:35 -- common/autotest_common.sh@817 -- # '[' -z 3415275 ']' 00:17:35.567 12:13:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.567 12:13:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:35.567 12:13:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.567 12:13:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:35.567 12:13:35 -- common/autotest_common.sh@10 -- # set +x 00:17:35.567 [2024-04-26 12:13:35.862735] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:35.567 [2024-04-26 12:13:35.862796] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.567 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.567 [2024-04-26 12:13:35.953104] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.567 [2024-04-26 12:13:36.045356] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.567 [2024-04-26 12:13:36.045418] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.567 [2024-04-26 12:13:36.045427] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.567 [2024-04-26 12:13:36.045434] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.567 [2024-04-26 12:13:36.045440] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
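The interface plumbing that nvmftestinit traced just above condenses to the sketch below, assuming the same pair of e810 ports (cvl_0_0 becomes the target side, cvl_0_1 stays on the host as the initiator side) and the 10.0.0.0/24 addressing used in this run:

# move the target port into its own network namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# let NVMe/TCP traffic (port 4420) in from the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# sanity-check both directions, as the ping output above does
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1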
00:17:35.567 [2024-04-26 12:13:36.045477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.567 12:13:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:35.567 12:13:36 -- common/autotest_common.sh@850 -- # return 0 00:17:35.567 12:13:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:35.567 12:13:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:35.567 12:13:36 -- common/autotest_common.sh@10 -- # set +x 00:17:35.567 12:13:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.567 12:13:36 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:35.567 12:13:36 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:35.828 true 00:17:35.828 12:13:36 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:35.828 12:13:36 -- target/tls.sh@73 -- # jq -r .tls_version 00:17:35.828 12:13:37 -- target/tls.sh@73 -- # version=0 00:17:35.828 12:13:37 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:35.828 12:13:37 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:36.088 12:13:37 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:36.088 12:13:37 -- target/tls.sh@81 -- # jq -r .tls_version 00:17:36.349 12:13:37 -- target/tls.sh@81 -- # version=13 00:17:36.349 12:13:37 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:36.349 12:13:37 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:36.349 12:13:37 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:36.349 12:13:37 -- target/tls.sh@89 -- # jq -r .tls_version 00:17:36.610 12:13:37 -- target/tls.sh@89 -- # version=7 00:17:36.610 12:13:37 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:36.610 12:13:37 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:36.610 12:13:37 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:36.871 12:13:37 -- target/tls.sh@96 -- # ktls=false 00:17:36.871 12:13:37 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:36.871 12:13:37 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:36.871 12:13:38 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:36.871 12:13:38 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:37.131 12:13:38 -- target/tls.sh@104 -- # ktls=true 00:17:37.131 12:13:38 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:37.131 12:13:38 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:37.390 12:13:38 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:37.390 12:13:38 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:37.390 12:13:38 -- target/tls.sh@112 -- # ktls=false 00:17:37.390 12:13:38 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:37.390 12:13:38 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:17:37.390 12:13:38 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:37.390 12:13:38 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:37.390 12:13:38 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:37.390 12:13:38 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:17:37.390 12:13:38 -- nvmf/common.sh@693 -- # digest=1 00:17:37.390 12:13:38 -- nvmf/common.sh@694 -- # python - 00:17:37.390 12:13:38 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:37.390 12:13:38 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:37.390 12:13:38 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:37.390 12:13:38 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:37.390 12:13:38 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:37.390 12:13:38 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:17:37.390 12:13:38 -- nvmf/common.sh@693 -- # digest=1 00:17:37.390 12:13:38 -- nvmf/common.sh@694 -- # python - 00:17:37.650 12:13:38 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:37.650 12:13:38 -- target/tls.sh@121 -- # mktemp 00:17:37.650 12:13:38 -- target/tls.sh@121 -- # key_path=/tmp/tmp.sNP5q1q1VL 00:17:37.650 12:13:38 -- target/tls.sh@122 -- # mktemp 00:17:37.650 12:13:38 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.TdF0RkdNEb 00:17:37.650 12:13:38 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:37.650 12:13:38 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:37.650 12:13:38 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.sNP5q1q1VL 00:17:37.650 12:13:38 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TdF0RkdNEb 00:17:37.650 12:13:38 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:37.650 12:13:38 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:37.909 12:13:38 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.sNP5q1q1VL 00:17:37.909 12:13:38 -- target/tls.sh@49 -- # local key=/tmp/tmp.sNP5q1q1VL 00:17:37.909 12:13:38 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:38.169 [2024-04-26 12:13:39.136940] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.169 12:13:39 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:38.169 12:13:39 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:38.428 [2024-04-26 12:13:39.433657] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:38.428 [2024-04-26 12:13:39.433831] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.428 12:13:39 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:38.428 malloc0 00:17:38.428 12:13:39 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:38.689 12:13:39 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sNP5q1q1VL 00:17:38.689 [2024-04-26 12:13:39.880630] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:38.689 12:13:39 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.sNP5q1q1VL 00:17:38.949 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.938 Initializing NVMe Controllers 00:17:48.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:48.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:48.938 Initialization complete. Launching workers. 00:17:48.938 ======================================================== 00:17:48.938 Latency(us) 00:17:48.938 Device Information : IOPS MiB/s Average min max 00:17:48.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18756.39 73.27 3412.21 1218.77 4004.35 00:17:48.938 ======================================================== 00:17:48.938 Total : 18756.39 73.27 3412.21 1218.77 4004.35 00:17:48.938 00:17:48.938 12:13:49 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sNP5q1q1VL 00:17:48.938 12:13:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:48.938 12:13:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:48.938 12:13:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:48.938 12:13:49 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sNP5q1q1VL' 00:17:48.938 12:13:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.938 12:13:50 -- target/tls.sh@28 -- # bdevperf_pid=3418226 00:17:48.938 12:13:50 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:48.938 12:13:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:48.938 12:13:50 -- target/tls.sh@31 -- # waitforlisten 3418226 /var/tmp/bdevperf.sock 00:17:48.938 12:13:50 -- common/autotest_common.sh@817 -- # '[' -z 3418226 ']' 00:17:48.938 12:13:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.938 12:13:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:48.938 12:13:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.938 12:13:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:48.938 12:13:50 -- common/autotest_common.sh@10 -- # set +x 00:17:48.938 [2024-04-26 12:13:50.030353] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:17:48.938 [2024-04-26 12:13:50.030408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418226 ] 00:17:48.938 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.938 [2024-04-26 12:13:50.080138] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.938 [2024-04-26 12:13:50.130771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.198 12:13:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:49.198 12:13:50 -- common/autotest_common.sh@850 -- # return 0 00:17:49.198 12:13:50 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sNP5q1q1VL 00:17:49.198 [2024-04-26 12:13:50.342151] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.198 [2024-04-26 12:13:50.342213] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:49.198 TLSTESTn1 00:17:49.458 12:13:50 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:49.458 Running I/O for 10 seconds... 00:17:59.452 00:17:59.452 Latency(us) 00:17:59.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.452 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:59.452 Verification LBA range: start 0x0 length 0x2000 00:17:59.452 TLSTESTn1 : 10.04 5872.51 22.94 0.00 0.00 21741.63 4642.13 31894.19 00:17:59.452 =================================================================================================================== 00:17:59.452 Total : 5872.51 22.94 0.00 0.00 21741.63 4642.13 31894.19 00:17:59.452 0 00:17:59.452 12:14:00 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:59.452 12:14:00 -- target/tls.sh@45 -- # killprocess 3418226 00:17:59.452 12:14:00 -- common/autotest_common.sh@936 -- # '[' -z 3418226 ']' 00:17:59.452 12:14:00 -- common/autotest_common.sh@940 -- # kill -0 3418226 00:17:59.452 12:14:00 -- common/autotest_common.sh@941 -- # uname 00:17:59.452 12:14:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:59.452 12:14:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3418226 00:17:59.452 12:14:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:59.452 12:14:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:59.452 12:14:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3418226' 00:17:59.452 killing process with pid 3418226 00:17:59.452 12:14:00 -- common/autotest_common.sh@955 -- # kill 3418226 00:17:59.452 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.452 00:17:59.452 Latency(us) 00:17:59.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.452 =================================================================================================================== 00:17:59.452 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:59.452 [2024-04-26 12:14:00.653738] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:59.452 12:14:00 -- common/autotest_common.sh@960 -- # wait 3418226 00:17:59.713 12:14:00 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TdF0RkdNEb 00:17:59.713 12:14:00 -- common/autotest_common.sh@638 -- # local es=0 00:17:59.713 12:14:00 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TdF0RkdNEb 00:17:59.713 12:14:00 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:59.713 12:14:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:59.713 12:14:00 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:59.713 12:14:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:59.713 12:14:00 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TdF0RkdNEb 00:17:59.713 12:14:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:59.713 12:14:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:59.713 12:14:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:59.713 12:14:00 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TdF0RkdNEb' 00:17:59.713 12:14:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.713 12:14:00 -- target/tls.sh@28 -- # bdevperf_pid=3420237 00:17:59.713 12:14:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.713 12:14:00 -- target/tls.sh@31 -- # waitforlisten 3420237 /var/tmp/bdevperf.sock 00:17:59.713 12:14:00 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:59.713 12:14:00 -- common/autotest_common.sh@817 -- # '[' -z 3420237 ']' 00:17:59.713 12:14:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.713 12:14:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:59.713 12:14:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.713 12:14:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:59.713 12:14:00 -- common/autotest_common.sh@10 -- # set +x 00:17:59.713 [2024-04-26 12:14:00.816768] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:17:59.713 [2024-04-26 12:14:00.816820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420237 ] 00:17:59.713 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.713 [2024-04-26 12:14:00.867366] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.713 [2024-04-26 12:14:00.916923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.656 12:14:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:00.656 12:14:01 -- common/autotest_common.sh@850 -- # return 0 00:18:00.656 12:14:01 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TdF0RkdNEb 00:18:00.656 [2024-04-26 12:14:01.729906] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:00.656 [2024-04-26 12:14:01.729969] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:00.656 [2024-04-26 12:14:01.738222] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:00.656 [2024-04-26 12:14:01.738911] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bdbf0 (107): Transport endpoint is not connected 00:18:00.656 [2024-04-26 12:14:01.739906] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bdbf0 (9): Bad file descriptor 00:18:00.656 [2024-04-26 12:14:01.740908] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:00.656 [2024-04-26 12:14:01.740914] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:00.656 [2024-04-26 12:14:01.740919] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:00.656 request: 00:18:00.656 { 00:18:00.656 "name": "TLSTEST", 00:18:00.656 "trtype": "tcp", 00:18:00.656 "traddr": "10.0.0.2", 00:18:00.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.656 "adrfam": "ipv4", 00:18:00.656 "trsvcid": "4420", 00:18:00.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.656 "psk": "/tmp/tmp.TdF0RkdNEb", 00:18:00.656 "method": "bdev_nvme_attach_controller", 00:18:00.656 "req_id": 1 00:18:00.656 } 00:18:00.656 Got JSON-RPC error response 00:18:00.656 response: 00:18:00.656 { 00:18:00.656 "code": -32602, 00:18:00.656 "message": "Invalid parameters" 00:18:00.656 } 00:18:00.656 12:14:01 -- target/tls.sh@36 -- # killprocess 3420237 00:18:00.656 12:14:01 -- common/autotest_common.sh@936 -- # '[' -z 3420237 ']' 00:18:00.656 12:14:01 -- common/autotest_common.sh@940 -- # kill -0 3420237 00:18:00.656 12:14:01 -- common/autotest_common.sh@941 -- # uname 00:18:00.656 12:14:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.656 12:14:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3420237 00:18:00.656 12:14:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:00.656 12:14:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:00.656 12:14:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3420237' 00:18:00.656 killing process with pid 3420237 00:18:00.656 12:14:01 -- common/autotest_common.sh@955 -- # kill 3420237 00:18:00.656 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.656 00:18:00.656 Latency(us) 00:18:00.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.656 =================================================================================================================== 00:18:00.656 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:00.656 [2024-04-26 12:14:01.827017] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:00.656 12:14:01 -- common/autotest_common.sh@960 -- # wait 3420237 00:18:00.917 12:14:01 -- target/tls.sh@37 -- # return 1 00:18:00.917 12:14:01 -- common/autotest_common.sh@641 -- # es=1 00:18:00.917 12:14:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:00.917 12:14:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:00.917 12:14:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:00.917 12:14:01 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sNP5q1q1VL 00:18:00.917 12:14:01 -- common/autotest_common.sh@638 -- # local es=0 00:18:00.917 12:14:01 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sNP5q1q1VL 00:18:00.917 12:14:01 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:00.917 12:14:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:00.917 12:14:01 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:00.917 12:14:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:00.917 12:14:01 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sNP5q1q1VL 00:18:00.917 12:14:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:00.917 12:14:01 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:00.917 12:14:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:18:00.917 12:14:01 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sNP5q1q1VL' 00:18:00.917 12:14:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:00.917 12:14:01 -- target/tls.sh@28 -- # bdevperf_pid=3420536 00:18:00.917 12:14:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:00.917 12:14:01 -- target/tls.sh@31 -- # waitforlisten 3420536 /var/tmp/bdevperf.sock 00:18:00.917 12:14:01 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:00.917 12:14:01 -- common/autotest_common.sh@817 -- # '[' -z 3420536 ']' 00:18:00.917 12:14:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.917 12:14:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:00.917 12:14:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.917 12:14:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:00.917 12:14:01 -- common/autotest_common.sh@10 -- # set +x 00:18:00.917 [2024-04-26 12:14:01.982046] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:00.917 [2024-04-26 12:14:01.982099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420536 ] 00:18:00.917 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.917 [2024-04-26 12:14:02.032581] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.917 [2024-04-26 12:14:02.082561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.861 12:14:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:01.861 12:14:02 -- common/autotest_common.sh@850 -- # return 0 00:18:01.861 12:14:02 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.sNP5q1q1VL 00:18:01.861 [2024-04-26 12:14:02.887400] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.861 [2024-04-26 12:14:02.887466] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:01.861 [2024-04-26 12:14:02.891770] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:01.861 [2024-04-26 12:14:02.891789] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:01.861 [2024-04-26 12:14:02.891809] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:01.861 [2024-04-26 12:14:02.892457] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2383bf0 (107): Transport endpoint is not connected 00:18:01.861 [2024-04-26 12:14:02.893452] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2383bf0 (9): Bad file descriptor 00:18:01.861 [2024-04-26 12:14:02.894453] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:01.861 [2024-04-26 12:14:02.894460] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:01.861 [2024-04-26 12:14:02.894465] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:01.861 request: 00:18:01.861 { 00:18:01.861 "name": "TLSTEST", 00:18:01.861 "trtype": "tcp", 00:18:01.861 "traddr": "10.0.0.2", 00:18:01.861 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:01.861 "adrfam": "ipv4", 00:18:01.861 "trsvcid": "4420", 00:18:01.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.861 "psk": "/tmp/tmp.sNP5q1q1VL", 00:18:01.861 "method": "bdev_nvme_attach_controller", 00:18:01.861 "req_id": 1 00:18:01.861 } 00:18:01.861 Got JSON-RPC error response 00:18:01.861 response: 00:18:01.861 { 00:18:01.861 "code": -32602, 00:18:01.861 "message": "Invalid parameters" 00:18:01.861 } 00:18:01.861 12:14:02 -- target/tls.sh@36 -- # killprocess 3420536 00:18:01.861 12:14:02 -- common/autotest_common.sh@936 -- # '[' -z 3420536 ']' 00:18:01.861 12:14:02 -- common/autotest_common.sh@940 -- # kill -0 3420536 00:18:01.861 12:14:02 -- common/autotest_common.sh@941 -- # uname 00:18:01.861 12:14:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.861 12:14:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3420536 00:18:01.861 12:14:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:01.861 12:14:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:01.861 12:14:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3420536' 00:18:01.861 killing process with pid 3420536 00:18:01.861 12:14:02 -- common/autotest_common.sh@955 -- # kill 3420536 00:18:01.861 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.861 00:18:01.861 Latency(us) 00:18:01.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.861 =================================================================================================================== 00:18:01.861 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:01.862 [2024-04-26 12:14:02.976711] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:01.862 12:14:02 -- common/autotest_common.sh@960 -- # wait 3420536 00:18:01.862 12:14:03 -- target/tls.sh@37 -- # return 1 00:18:01.862 12:14:03 -- common/autotest_common.sh@641 -- # es=1 00:18:01.862 12:14:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:01.862 12:14:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:01.862 12:14:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:01.862 12:14:03 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sNP5q1q1VL 00:18:01.862 12:14:03 -- common/autotest_common.sh@638 -- # local es=0 00:18:01.862 12:14:03 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sNP5q1q1VL 00:18:01.862 12:14:03 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:02.123 12:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:02.123 12:14:03 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:02.123 12:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:02.123 12:14:03 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sNP5q1q1VL 00:18:02.123 12:14:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:02.123 12:14:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:02.123 12:14:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:02.123 12:14:03 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sNP5q1q1VL' 00:18:02.123 12:14:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.123 12:14:03 -- target/tls.sh@28 -- # bdevperf_pid=3420613 00:18:02.123 12:14:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.123 12:14:03 -- target/tls.sh@31 -- # waitforlisten 3420613 /var/tmp/bdevperf.sock 00:18:02.123 12:14:03 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.123 12:14:03 -- common/autotest_common.sh@817 -- # '[' -z 3420613 ']' 00:18:02.123 12:14:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.123 12:14:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:02.123 12:14:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.123 12:14:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:02.123 12:14:03 -- common/autotest_common.sh@10 -- # set +x 00:18:02.123 [2024-04-26 12:14:03.139593] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:18:02.123 [2024-04-26 12:14:03.139648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420613 ] 00:18:02.123 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.123 [2024-04-26 12:14:03.190344] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.123 [2024-04-26 12:14:03.240450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.695 12:14:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:02.695 12:14:03 -- common/autotest_common.sh@850 -- # return 0 00:18:02.695 12:14:03 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sNP5q1q1VL 00:18:02.954 [2024-04-26 12:14:04.041398] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.954 [2024-04-26 12:14:04.041459] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:02.954 [2024-04-26 12:14:04.045635] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:02.954 [2024-04-26 12:14:04.045654] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:02.954 [2024-04-26 12:14:04.045673] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:02.954 [2024-04-26 12:14:04.046361] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b4bf0 (107): Transport endpoint is not connected 00:18:02.955 [2024-04-26 12:14:04.047355] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b4bf0 (9): Bad file descriptor 00:18:02.955 [2024-04-26 12:14:04.048357] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:02.955 [2024-04-26 12:14:04.048365] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:02.955 [2024-04-26 12:14:04.048373] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:02.955 request: 00:18:02.955 { 00:18:02.955 "name": "TLSTEST", 00:18:02.955 "trtype": "tcp", 00:18:02.955 "traddr": "10.0.0.2", 00:18:02.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.955 "adrfam": "ipv4", 00:18:02.955 "trsvcid": "4420", 00:18:02.955 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:02.955 "psk": "/tmp/tmp.sNP5q1q1VL", 00:18:02.955 "method": "bdev_nvme_attach_controller", 00:18:02.955 "req_id": 1 00:18:02.955 } 00:18:02.955 Got JSON-RPC error response 00:18:02.955 response: 00:18:02.955 { 00:18:02.955 "code": -32602, 00:18:02.955 "message": "Invalid parameters" 00:18:02.955 } 00:18:02.955 12:14:04 -- target/tls.sh@36 -- # killprocess 3420613 00:18:02.955 12:14:04 -- common/autotest_common.sh@936 -- # '[' -z 3420613 ']' 00:18:02.955 12:14:04 -- common/autotest_common.sh@940 -- # kill -0 3420613 00:18:02.955 12:14:04 -- common/autotest_common.sh@941 -- # uname 00:18:02.955 12:14:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:02.955 12:14:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3420613 00:18:02.955 12:14:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:02.955 12:14:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:02.955 12:14:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3420613' 00:18:02.955 killing process with pid 3420613 00:18:02.955 12:14:04 -- common/autotest_common.sh@955 -- # kill 3420613 00:18:02.955 Received shutdown signal, test time was about 10.000000 seconds 00:18:02.955 00:18:02.955 Latency(us) 00:18:02.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.955 =================================================================================================================== 00:18:02.955 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:02.955 [2024-04-26 12:14:04.132788] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:02.955 12:14:04 -- common/autotest_common.sh@960 -- # wait 3420613 00:18:03.215 12:14:04 -- target/tls.sh@37 -- # return 1 00:18:03.215 12:14:04 -- common/autotest_common.sh@641 -- # es=1 00:18:03.215 12:14:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:03.215 12:14:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:03.215 12:14:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:03.215 12:14:04 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:03.215 12:14:04 -- common/autotest_common.sh@638 -- # local es=0 00:18:03.215 12:14:04 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:03.215 12:14:04 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:03.215 12:14:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:03.215 12:14:04 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:03.215 12:14:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:03.215 12:14:04 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:03.215 12:14:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:03.215 12:14:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:03.215 12:14:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:03.215 12:14:04 -- target/tls.sh@23 -- # psk= 
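Note on the failed attach attempts above: both are issued through SPDK's rpc.py against the bdevperf RPC socket, and the request dumps in the log show the resulting bdev_nvme_attach_controller parameters. Below is a minimal, stdlib-only sketch of that JSON-RPC exchange, written independently of rpc.py; the socket path, request id, buffer size and parameter values simply mirror the log and are illustrative, not SPDK's client code.

    #!/usr/bin/env python3
    # Minimal sketch (not SPDK's rpc.py) of the JSON-RPC call behind the
    # bdev_nvme_attach_controller requests dumped above.
    import json
    import socket

    def rpc_call(sock_path: str, method: str, params: dict) -> dict:
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        buf = b""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise ConnectionError("RPC socket closed before a full reply")
                buf += chunk
                try:
                    return json.loads(buf.decode())   # complete JSON object => done
                except ValueError:
                    continue                          # partial reply, keep reading

    print(rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
        "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host2",
        "psk": "/tmp/tmp.sNP5q1q1VL",
    }))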
00:18:03.215 12:14:04 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.215 12:14:04 -- target/tls.sh@28 -- # bdevperf_pid=3420937 00:18:03.215 12:14:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:03.215 12:14:04 -- target/tls.sh@31 -- # waitforlisten 3420937 /var/tmp/bdevperf.sock 00:18:03.215 12:14:04 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:03.215 12:14:04 -- common/autotest_common.sh@817 -- # '[' -z 3420937 ']' 00:18:03.215 12:14:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.215 12:14:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:03.215 12:14:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.215 12:14:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:03.215 12:14:04 -- common/autotest_common.sh@10 -- # set +x 00:18:03.215 [2024-04-26 12:14:04.297086] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:03.215 [2024-04-26 12:14:04.297152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420937 ] 00:18:03.215 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.215 [2024-04-26 12:14:04.349331] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.215 [2024-04-26 12:14:04.398788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.153 12:14:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:04.153 12:14:05 -- common/autotest_common.sh@850 -- # return 0 00:18:04.153 12:14:05 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:04.153 [2024-04-26 12:14:05.206594] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:04.153 [2024-04-26 12:14:05.208201] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cd590 (9): Bad file descriptor 00:18:04.153 [2024-04-26 12:14:05.209201] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:04.153 [2024-04-26 12:14:05.209209] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:04.153 [2024-04-26 12:14:05.209214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:04.153 request: 00:18:04.153 { 00:18:04.153 "name": "TLSTEST", 00:18:04.153 "trtype": "tcp", 00:18:04.153 "traddr": "10.0.0.2", 00:18:04.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.153 "adrfam": "ipv4", 00:18:04.153 "trsvcid": "4420", 00:18:04.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.153 "method": "bdev_nvme_attach_controller", 00:18:04.153 "req_id": 1 00:18:04.153 } 00:18:04.153 Got JSON-RPC error response 00:18:04.153 response: 00:18:04.153 { 00:18:04.153 "code": -32602, 00:18:04.153 "message": "Invalid parameters" 00:18:04.153 } 00:18:04.153 12:14:05 -- target/tls.sh@36 -- # killprocess 3420937 00:18:04.153 12:14:05 -- common/autotest_common.sh@936 -- # '[' -z 3420937 ']' 00:18:04.153 12:14:05 -- common/autotest_common.sh@940 -- # kill -0 3420937 00:18:04.153 12:14:05 -- common/autotest_common.sh@941 -- # uname 00:18:04.153 12:14:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.153 12:14:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3420937 00:18:04.153 12:14:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:04.153 12:14:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:04.153 12:14:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3420937' 00:18:04.153 killing process with pid 3420937 00:18:04.153 12:14:05 -- common/autotest_common.sh@955 -- # kill 3420937 00:18:04.153 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.153 00:18:04.153 Latency(us) 00:18:04.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.153 =================================================================================================================== 00:18:04.153 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:04.153 12:14:05 -- common/autotest_common.sh@960 -- # wait 3420937 00:18:04.414 12:14:05 -- target/tls.sh@37 -- # return 1 00:18:04.414 12:14:05 -- common/autotest_common.sh@641 -- # es=1 00:18:04.414 12:14:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:04.414 12:14:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:04.414 12:14:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:04.414 12:14:05 -- target/tls.sh@158 -- # killprocess 3415275 00:18:04.414 12:14:05 -- common/autotest_common.sh@936 -- # '[' -z 3415275 ']' 00:18:04.414 12:14:05 -- common/autotest_common.sh@940 -- # kill -0 3415275 00:18:04.414 12:14:05 -- common/autotest_common.sh@941 -- # uname 00:18:04.414 12:14:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.414 12:14:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3415275 00:18:04.414 12:14:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:04.414 12:14:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:04.414 12:14:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3415275' 00:18:04.414 killing process with pid 3415275 00:18:04.414 12:14:05 -- common/autotest_common.sh@955 -- # kill 3415275 00:18:04.414 [2024-04-26 12:14:05.454853] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:04.414 12:14:05 -- common/autotest_common.sh@960 -- # wait 3415275 00:18:04.414 12:14:05 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:04.414 12:14:05 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:18:04.414 12:14:05 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:04.414 12:14:05 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:04.414 12:14:05 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:04.414 12:14:05 -- nvmf/common.sh@693 -- # digest=2 00:18:04.414 12:14:05 -- nvmf/common.sh@694 -- # python - 00:18:04.414 12:14:05 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:04.414 12:14:05 -- target/tls.sh@160 -- # mktemp 00:18:04.414 12:14:05 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.0wbser70b5 00:18:04.414 12:14:05 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:04.414 12:14:05 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.0wbser70b5 00:18:04.414 12:14:05 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:04.414 12:14:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:04.414 12:14:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:04.414 12:14:05 -- common/autotest_common.sh@10 -- # set +x 00:18:04.674 12:14:05 -- nvmf/common.sh@470 -- # nvmfpid=3421294 00:18:04.674 12:14:05 -- nvmf/common.sh@471 -- # waitforlisten 3421294 00:18:04.674 12:14:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:04.674 12:14:05 -- common/autotest_common.sh@817 -- # '[' -z 3421294 ']' 00:18:04.674 12:14:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.674 12:14:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:04.674 12:14:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.674 12:14:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:04.674 12:14:05 -- common/autotest_common.sh@10 -- # set +x 00:18:04.674 [2024-04-26 12:14:05.694251] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:04.674 [2024-04-26 12:14:05.694349] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.674 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.674 [2024-04-26 12:14:05.781456] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.674 [2024-04-26 12:14:05.838977] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.674 [2024-04-26 12:14:05.839011] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.674 [2024-04-26 12:14:05.839016] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.674 [2024-04-26 12:14:05.839021] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.674 [2024-04-26 12:14:05.839025] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
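Note on the key_long value generated above: it comes from the format_interchange_psk/format_key helpers, which shell out to a small Python snippet. The sketch below reconstructs that value under the assumption that the helpers follow the NVMe TLS PSK interchange layout (prefix NVMeTLSkey-1, a two-digit hash indicator, then base64 of the configured PSK bytes with a 4-byte little-endian CRC-32 appended); run with the hex string and digest 2 from the log, it should reproduce the NVMeTLSkey-1:02:... key written to /tmp/tmp.0wbser70b5.

    #!/usr/bin/env python3
    # Sketch of the assumed PSK interchange-format construction; the helper
    # name mirrors the test script, the exact layout is an assumption.
    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int) -> str:
        data = key.encode()  # the hex string is used as literal ASCII bytes
        crc = zlib.crc32(data).to_bytes(4, "little")
        return f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(data + crc).decode()}:"

    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))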
00:18:04.674 [2024-04-26 12:14:05.839043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.614 12:14:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:05.614 12:14:06 -- common/autotest_common.sh@850 -- # return 0 00:18:05.614 12:14:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:05.614 12:14:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:05.614 12:14:06 -- common/autotest_common.sh@10 -- # set +x 00:18:05.614 12:14:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.614 12:14:06 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.0wbser70b5 00:18:05.614 12:14:06 -- target/tls.sh@49 -- # local key=/tmp/tmp.0wbser70b5 00:18:05.614 12:14:06 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:05.614 [2024-04-26 12:14:06.673284] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.614 12:14:06 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:05.874 12:14:06 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:05.874 [2024-04-26 12:14:06.982046] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.874 [2024-04-26 12:14:06.982220] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.874 12:14:06 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:06.133 malloc0 00:18:06.134 12:14:07 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:06.134 12:14:07 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0wbser70b5 00:18:06.394 [2024-04-26 12:14:07.425149] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:06.394 12:14:07 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0wbser70b5 00:18:06.394 12:14:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:06.394 12:14:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:06.394 12:14:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:06.394 12:14:07 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0wbser70b5' 00:18:06.394 12:14:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:06.394 12:14:07 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:06.394 12:14:07 -- target/tls.sh@28 -- # bdevperf_pid=3421652 00:18:06.394 12:14:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:06.394 12:14:07 -- target/tls.sh@31 -- # waitforlisten 3421652 /var/tmp/bdevperf.sock 00:18:06.394 12:14:07 -- common/autotest_common.sh@817 -- # '[' -z 3421652 ']' 00:18:06.394 12:14:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.394 12:14:07 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:18:06.394 12:14:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:06.394 12:14:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:06.394 12:14:07 -- common/autotest_common.sh@10 -- # set +x 00:18:06.394 [2024-04-26 12:14:07.470049] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:06.394 [2024-04-26 12:14:07.470096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421652 ] 00:18:06.394 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.394 [2024-04-26 12:14:07.519308] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.394 [2024-04-26 12:14:07.569623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.654 12:14:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:06.654 12:14:07 -- common/autotest_common.sh@850 -- # return 0 00:18:06.654 12:14:07 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0wbser70b5 00:18:06.654 [2024-04-26 12:14:07.789016] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.654 [2024-04-26 12:14:07.789069] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:06.654 TLSTESTn1 00:18:06.913 12:14:07 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:06.913 Running I/O for 10 seconds... 
00:18:16.997 00:18:16.997 Latency(us) 00:18:16.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.997 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:16.997 Verification LBA range: start 0x0 length 0x2000 00:18:16.997 TLSTESTn1 : 10.02 5769.42 22.54 0.00 0.00 22155.42 5488.64 29272.75 00:18:16.997 =================================================================================================================== 00:18:16.997 Total : 5769.42 22.54 0.00 0.00 22155.42 5488.64 29272.75 00:18:16.997 0 00:18:16.997 12:14:18 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:16.997 12:14:18 -- target/tls.sh@45 -- # killprocess 3421652 00:18:16.997 12:14:18 -- common/autotest_common.sh@936 -- # '[' -z 3421652 ']' 00:18:16.997 12:14:18 -- common/autotest_common.sh@940 -- # kill -0 3421652 00:18:16.997 12:14:18 -- common/autotest_common.sh@941 -- # uname 00:18:16.997 12:14:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:16.997 12:14:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3421652 00:18:16.997 12:14:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:16.997 12:14:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:16.997 12:14:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3421652' 00:18:16.997 killing process with pid 3421652 00:18:16.997 12:14:18 -- common/autotest_common.sh@955 -- # kill 3421652 00:18:16.997 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.997 00:18:16.997 Latency(us) 00:18:16.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.997 =================================================================================================================== 00:18:16.997 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.997 [2024-04-26 12:14:18.078843] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:16.997 12:14:18 -- common/autotest_common.sh@960 -- # wait 3421652 00:18:16.997 12:14:18 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.0wbser70b5 00:18:16.997 12:14:18 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0wbser70b5 00:18:16.997 12:14:18 -- common/autotest_common.sh@638 -- # local es=0 00:18:16.997 12:14:18 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0wbser70b5 00:18:16.997 12:14:18 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:16.997 12:14:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:16.997 12:14:18 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:16.997 12:14:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:16.998 12:14:18 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0wbser70b5 00:18:16.998 12:14:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:16.998 12:14:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:16.998 12:14:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:16.998 12:14:18 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0wbser70b5' 00:18:16.998 12:14:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.998 12:14:18 -- target/tls.sh@28 -- # 
bdevperf_pid=3423664 00:18:16.998 12:14:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.998 12:14:18 -- target/tls.sh@31 -- # waitforlisten 3423664 /var/tmp/bdevperf.sock 00:18:16.998 12:14:18 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.998 12:14:18 -- common/autotest_common.sh@817 -- # '[' -z 3423664 ']' 00:18:16.998 12:14:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.998 12:14:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:16.998 12:14:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.998 12:14:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:16.998 12:14:18 -- common/autotest_common.sh@10 -- # set +x 00:18:17.258 [2024-04-26 12:14:18.252952] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:17.258 [2024-04-26 12:14:18.253011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423664 ] 00:18:17.258 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.258 [2024-04-26 12:14:18.304754] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.258 [2024-04-26 12:14:18.353868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.831 12:14:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:17.831 12:14:19 -- common/autotest_common.sh@850 -- # return 0 00:18:17.831 12:14:19 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0wbser70b5 00:18:18.093 [2024-04-26 12:14:19.158716] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.093 [2024-04-26 12:14:19.158766] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:18.093 [2024-04-26 12:14:19.158771] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.0wbser70b5 00:18:18.093 request: 00:18:18.093 { 00:18:18.093 "name": "TLSTEST", 00:18:18.093 "trtype": "tcp", 00:18:18.093 "traddr": "10.0.0.2", 00:18:18.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.093 "adrfam": "ipv4", 00:18:18.093 "trsvcid": "4420", 00:18:18.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.093 "psk": "/tmp/tmp.0wbser70b5", 00:18:18.093 "method": "bdev_nvme_attach_controller", 00:18:18.093 "req_id": 1 00:18:18.093 } 00:18:18.093 Got JSON-RPC error response 00:18:18.093 response: 00:18:18.093 { 00:18:18.093 "code": -1, 00:18:18.093 "message": "Operation not permitted" 00:18:18.093 } 00:18:18.093 12:14:19 -- target/tls.sh@36 -- # killprocess 3423664 00:18:18.093 12:14:19 -- common/autotest_common.sh@936 -- # '[' -z 3423664 ']' 00:18:18.093 12:14:19 -- common/autotest_common.sh@940 -- # kill -0 3423664 00:18:18.093 12:14:19 -- common/autotest_common.sh@941 -- # uname 00:18:18.093 12:14:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.093 
12:14:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3423664 00:18:18.093 12:14:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:18.093 12:14:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:18.093 12:14:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3423664' 00:18:18.093 killing process with pid 3423664 00:18:18.093 12:14:19 -- common/autotest_common.sh@955 -- # kill 3423664 00:18:18.093 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.093 00:18:18.093 Latency(us) 00:18:18.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.093 =================================================================================================================== 00:18:18.093 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.093 12:14:19 -- common/autotest_common.sh@960 -- # wait 3423664 00:18:18.354 12:14:19 -- target/tls.sh@37 -- # return 1 00:18:18.354 12:14:19 -- common/autotest_common.sh@641 -- # es=1 00:18:18.354 12:14:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:18.354 12:14:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:18.354 12:14:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:18.354 12:14:19 -- target/tls.sh@174 -- # killprocess 3421294 00:18:18.354 12:14:19 -- common/autotest_common.sh@936 -- # '[' -z 3421294 ']' 00:18:18.354 12:14:19 -- common/autotest_common.sh@940 -- # kill -0 3421294 00:18:18.354 12:14:19 -- common/autotest_common.sh@941 -- # uname 00:18:18.354 12:14:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.354 12:14:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3421294 00:18:18.354 12:14:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:18.354 12:14:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:18.354 12:14:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3421294' 00:18:18.354 killing process with pid 3421294 00:18:18.354 12:14:19 -- common/autotest_common.sh@955 -- # kill 3421294 00:18:18.354 [2024-04-26 12:14:19.405506] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:18.354 12:14:19 -- common/autotest_common.sh@960 -- # wait 3421294 00:18:18.354 12:14:19 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:18.354 12:14:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:18.354 12:14:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:18.354 12:14:19 -- common/autotest_common.sh@10 -- # set +x 00:18:18.354 12:14:19 -- nvmf/common.sh@470 -- # nvmfpid=3424010 00:18:18.354 12:14:19 -- nvmf/common.sh@471 -- # waitforlisten 3424010 00:18:18.354 12:14:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:18.354 12:14:19 -- common/autotest_common.sh@817 -- # '[' -z 3424010 ']' 00:18:18.354 12:14:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.354 12:14:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:18.354 12:14:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
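Note on the "Incorrect permissions for PSK file" / "Could not load PSK" errors above, and the nvmf_subsystem_add_host failure that follows: they are triggered by the chmod 0666 step, since both the initiator (bdev_nvme_load_psk) and the target (tcp_load_psk) refuse a PSK file that group or other can access, which is why the test later restores mode 0600. The check below only illustrates that kind of owner-only gate; it is not SPDK's actual code.

    #!/usr/bin/env python3
    # Illustrative owner-only permission gate for a PSK file (not SPDK's
    # implementation): reject the key if group/other have any access bits.
    import os
    import stat

    def load_psk(path: str) -> bytes:
        mode = os.stat(path).st_mode
        if mode & (stat.S_IRWXG | stat.S_IRWXO):
            raise PermissionError(f"Incorrect permissions for PSK file {path}")
        with open(path, "rb") as f:
            return f.read()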
00:18:18.354 12:14:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:18.354 12:14:19 -- common/autotest_common.sh@10 -- # set +x 00:18:18.615 [2024-04-26 12:14:19.582291] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:18.615 [2024-04-26 12:14:19.582342] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.615 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.615 [2024-04-26 12:14:19.662077] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.615 [2024-04-26 12:14:19.714801] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.615 [2024-04-26 12:14:19.714841] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.615 [2024-04-26 12:14:19.714847] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.615 [2024-04-26 12:14:19.714852] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.615 [2024-04-26 12:14:19.714856] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:18.615 [2024-04-26 12:14:19.714877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.184 12:14:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:19.184 12:14:20 -- common/autotest_common.sh@850 -- # return 0 00:18:19.184 12:14:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:19.184 12:14:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:19.184 12:14:20 -- common/autotest_common.sh@10 -- # set +x 00:18:19.184 12:14:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.184 12:14:20 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.0wbser70b5 00:18:19.184 12:14:20 -- common/autotest_common.sh@638 -- # local es=0 00:18:19.184 12:14:20 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.0wbser70b5 00:18:19.184 12:14:20 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:18:19.184 12:14:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:19.184 12:14:20 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:18:19.184 12:14:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:19.184 12:14:20 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.0wbser70b5 00:18:19.184 12:14:20 -- target/tls.sh@49 -- # local key=/tmp/tmp.0wbser70b5 00:18:19.184 12:14:20 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:19.445 [2024-04-26 12:14:20.524841] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.445 12:14:20 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:19.706 12:14:20 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:19.706 [2024-04-26 12:14:20.829588] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:19.706 [2024-04-26 12:14:20.829765] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.706 12:14:20 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:19.966 malloc0 00:18:19.966 12:14:20 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:19.966 12:14:21 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0wbser70b5 00:18:20.251 [2024-04-26 12:14:21.276467] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:20.251 [2024-04-26 12:14:21.276488] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:20.251 [2024-04-26 12:14:21.276505] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:20.251 request: 00:18:20.251 { 00:18:20.251 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.251 "host": "nqn.2016-06.io.spdk:host1", 00:18:20.251 "psk": "/tmp/tmp.0wbser70b5", 00:18:20.251 "method": "nvmf_subsystem_add_host", 00:18:20.251 "req_id": 1 00:18:20.251 } 00:18:20.251 Got JSON-RPC error response 00:18:20.251 response: 00:18:20.251 { 00:18:20.251 "code": -32603, 00:18:20.251 "message": "Internal error" 00:18:20.251 } 00:18:20.251 12:14:21 -- common/autotest_common.sh@641 -- # es=1 00:18:20.251 12:14:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:20.251 12:14:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:20.251 12:14:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:20.251 12:14:21 -- target/tls.sh@180 -- # killprocess 3424010 00:18:20.251 12:14:21 -- common/autotest_common.sh@936 -- # '[' -z 3424010 ']' 00:18:20.251 12:14:21 -- common/autotest_common.sh@940 -- # kill -0 3424010 00:18:20.251 12:14:21 -- common/autotest_common.sh@941 -- # uname 00:18:20.251 12:14:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:20.251 12:14:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3424010 00:18:20.251 12:14:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:20.251 12:14:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:20.251 12:14:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3424010' 00:18:20.251 killing process with pid 3424010 00:18:20.251 12:14:21 -- common/autotest_common.sh@955 -- # kill 3424010 00:18:20.251 12:14:21 -- common/autotest_common.sh@960 -- # wait 3424010 00:18:20.251 12:14:21 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.0wbser70b5 00:18:20.251 12:14:21 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:20.251 12:14:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:20.251 12:14:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:20.251 12:14:21 -- common/autotest_common.sh@10 -- # set +x 00:18:20.251 12:14:21 -- nvmf/common.sh@470 -- # nvmfpid=3424380 00:18:20.251 12:14:21 -- nvmf/common.sh@471 -- # waitforlisten 3424380 00:18:20.251 12:14:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:20.251 12:14:21 -- common/autotest_common.sh@817 -- # '[' -z 3424380 ']' 00:18:20.251 12:14:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.251 12:14:21 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:18:20.252 12:14:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.252 12:14:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:20.252 12:14:21 -- common/autotest_common.sh@10 -- # set +x 00:18:20.513 [2024-04-26 12:14:21.503615] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:20.513 [2024-04-26 12:14:21.503666] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.513 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.513 [2024-04-26 12:14:21.585606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.513 [2024-04-26 12:14:21.639241] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.513 [2024-04-26 12:14:21.639275] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.513 [2024-04-26 12:14:21.639281] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.513 [2024-04-26 12:14:21.639285] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.513 [2024-04-26 12:14:21.639289] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.513 [2024-04-26 12:14:21.639311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.084 12:14:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:21.084 12:14:22 -- common/autotest_common.sh@850 -- # return 0 00:18:21.084 12:14:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:21.084 12:14:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:21.084 12:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:21.084 12:14:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.084 12:14:22 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.0wbser70b5 00:18:21.084 12:14:22 -- target/tls.sh@49 -- # local key=/tmp/tmp.0wbser70b5 00:18:21.084 12:14:22 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:21.344 [2024-04-26 12:14:22.425109] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.344 12:14:22 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:21.605 12:14:22 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:21.605 [2024-04-26 12:14:22.721829] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:21.605 [2024-04-26 12:14:22.722006] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.605 12:14:22 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:21.865 malloc0 00:18:21.865 12:14:22 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:21.865 12:14:23 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0wbser70b5 00:18:22.126 [2024-04-26 12:14:23.144613] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:22.126 12:14:23 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:22.126 12:14:23 -- target/tls.sh@188 -- # bdevperf_pid=3424746 00:18:22.126 12:14:23 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:22.126 12:14:23 -- target/tls.sh@191 -- # waitforlisten 3424746 /var/tmp/bdevperf.sock 00:18:22.126 12:14:23 -- common/autotest_common.sh@817 -- # '[' -z 3424746 ']' 00:18:22.126 12:14:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.126 12:14:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:22.126 12:14:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.126 12:14:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:22.126 12:14:23 -- common/autotest_common.sh@10 -- # set +x 00:18:22.126 [2024-04-26 12:14:23.187380] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:22.126 [2024-04-26 12:14:23.187430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424746 ] 00:18:22.126 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.126 [2024-04-26 12:14:23.238054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.126 [2024-04-26 12:14:23.287991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.385 12:14:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:22.385 12:14:23 -- common/autotest_common.sh@850 -- # return 0 00:18:22.385 12:14:23 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0wbser70b5 00:18:22.385 [2024-04-26 12:14:23.503403] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:22.385 [2024-04-26 12:14:23.503472] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:22.385 TLSTESTn1 00:18:22.385 12:14:23 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:22.645 12:14:23 -- target/tls.sh@196 -- # tgtconf='{ 00:18:22.645 "subsystems": [ 00:18:22.645 { 00:18:22.645 "subsystem": "keyring", 00:18:22.645 "config": [] 00:18:22.645 }, 00:18:22.645 { 00:18:22.645 "subsystem": "iobuf", 00:18:22.645 "config": [ 00:18:22.645 { 00:18:22.645 "method": "iobuf_set_options", 00:18:22.645 "params": { 00:18:22.645 
"small_pool_count": 8192, 00:18:22.645 "large_pool_count": 1024, 00:18:22.645 "small_bufsize": 8192, 00:18:22.645 "large_bufsize": 135168 00:18:22.645 } 00:18:22.645 } 00:18:22.645 ] 00:18:22.645 }, 00:18:22.645 { 00:18:22.645 "subsystem": "sock", 00:18:22.645 "config": [ 00:18:22.645 { 00:18:22.645 "method": "sock_impl_set_options", 00:18:22.645 "params": { 00:18:22.645 "impl_name": "posix", 00:18:22.645 "recv_buf_size": 2097152, 00:18:22.645 "send_buf_size": 2097152, 00:18:22.645 "enable_recv_pipe": true, 00:18:22.645 "enable_quickack": false, 00:18:22.645 "enable_placement_id": 0, 00:18:22.645 "enable_zerocopy_send_server": true, 00:18:22.645 "enable_zerocopy_send_client": false, 00:18:22.645 "zerocopy_threshold": 0, 00:18:22.645 "tls_version": 0, 00:18:22.645 "enable_ktls": false 00:18:22.645 } 00:18:22.645 }, 00:18:22.645 { 00:18:22.645 "method": "sock_impl_set_options", 00:18:22.645 "params": { 00:18:22.645 "impl_name": "ssl", 00:18:22.645 "recv_buf_size": 4096, 00:18:22.645 "send_buf_size": 4096, 00:18:22.645 "enable_recv_pipe": true, 00:18:22.645 "enable_quickack": false, 00:18:22.645 "enable_placement_id": 0, 00:18:22.645 "enable_zerocopy_send_server": true, 00:18:22.645 "enable_zerocopy_send_client": false, 00:18:22.645 "zerocopy_threshold": 0, 00:18:22.645 "tls_version": 0, 00:18:22.645 "enable_ktls": false 00:18:22.645 } 00:18:22.645 } 00:18:22.645 ] 00:18:22.645 }, 00:18:22.645 { 00:18:22.645 "subsystem": "vmd", 00:18:22.645 "config": [] 00:18:22.645 }, 00:18:22.645 { 00:18:22.645 "subsystem": "accel", 00:18:22.645 "config": [ 00:18:22.645 { 00:18:22.645 "method": "accel_set_options", 00:18:22.645 "params": { 00:18:22.645 "small_cache_size": 128, 00:18:22.645 "large_cache_size": 16, 00:18:22.645 "task_count": 2048, 00:18:22.645 "sequence_count": 2048, 00:18:22.645 "buf_count": 2048 00:18:22.645 } 00:18:22.645 } 00:18:22.645 ] 00:18:22.645 }, 00:18:22.645 { 00:18:22.645 "subsystem": "bdev", 00:18:22.645 "config": [ 00:18:22.645 { 00:18:22.645 "method": "bdev_set_options", 00:18:22.645 "params": { 00:18:22.645 "bdev_io_pool_size": 65535, 00:18:22.645 "bdev_io_cache_size": 256, 00:18:22.645 "bdev_auto_examine": true, 00:18:22.646 "iobuf_small_cache_size": 128, 00:18:22.646 "iobuf_large_cache_size": 16 00:18:22.646 } 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "method": "bdev_raid_set_options", 00:18:22.646 "params": { 00:18:22.646 "process_window_size_kb": 1024 00:18:22.646 } 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "method": "bdev_iscsi_set_options", 00:18:22.646 "params": { 00:18:22.646 "timeout_sec": 30 00:18:22.646 } 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "method": "bdev_nvme_set_options", 00:18:22.646 "params": { 00:18:22.646 "action_on_timeout": "none", 00:18:22.646 "timeout_us": 0, 00:18:22.646 "timeout_admin_us": 0, 00:18:22.646 "keep_alive_timeout_ms": 10000, 00:18:22.646 "arbitration_burst": 0, 00:18:22.646 "low_priority_weight": 0, 00:18:22.646 "medium_priority_weight": 0, 00:18:22.646 "high_priority_weight": 0, 00:18:22.646 "nvme_adminq_poll_period_us": 10000, 00:18:22.646 "nvme_ioq_poll_period_us": 0, 00:18:22.646 "io_queue_requests": 0, 00:18:22.646 "delay_cmd_submit": true, 00:18:22.646 "transport_retry_count": 4, 00:18:22.646 "bdev_retry_count": 3, 00:18:22.646 "transport_ack_timeout": 0, 00:18:22.646 "ctrlr_loss_timeout_sec": 0, 00:18:22.646 "reconnect_delay_sec": 0, 00:18:22.646 "fast_io_fail_timeout_sec": 0, 00:18:22.646 "disable_auto_failback": false, 00:18:22.646 "generate_uuids": false, 00:18:22.646 "transport_tos": 0, 00:18:22.646 "nvme_error_stat": 
false, 00:18:22.646 "rdma_srq_size": 0, 00:18:22.646 "io_path_stat": false, 00:18:22.646 "allow_accel_sequence": false, 00:18:22.646 "rdma_max_cq_size": 0, 00:18:22.646 "rdma_cm_event_timeout_ms": 0, 00:18:22.646 "dhchap_digests": [ 00:18:22.646 "sha256", 00:18:22.646 "sha384", 00:18:22.646 "sha512" 00:18:22.646 ], 00:18:22.646 "dhchap_dhgroups": [ 00:18:22.646 "null", 00:18:22.646 "ffdhe2048", 00:18:22.646 "ffdhe3072", 00:18:22.646 "ffdhe4096", 00:18:22.646 "ffdhe6144", 00:18:22.646 "ffdhe8192" 00:18:22.646 ] 00:18:22.646 } 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "method": "bdev_nvme_set_hotplug", 00:18:22.646 "params": { 00:18:22.646 "period_us": 100000, 00:18:22.646 "enable": false 00:18:22.646 } 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "method": "bdev_malloc_create", 00:18:22.646 "params": { 00:18:22.646 "name": "malloc0", 00:18:22.646 "num_blocks": 8192, 00:18:22.646 "block_size": 4096, 00:18:22.646 "physical_block_size": 4096, 00:18:22.646 "uuid": "620e9719-d561-4afe-813a-ddee4ea16f5a", 00:18:22.646 "optimal_io_boundary": 0 00:18:22.646 } 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "method": "bdev_wait_for_examine" 00:18:22.646 } 00:18:22.646 ] 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "subsystem": "nbd", 00:18:22.646 "config": [] 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "subsystem": "scheduler", 00:18:22.646 "config": [ 00:18:22.646 { 00:18:22.646 "method": "framework_set_scheduler", 00:18:22.646 "params": { 00:18:22.646 "name": "static" 00:18:22.646 } 00:18:22.646 } 00:18:22.646 ] 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "subsystem": "nvmf", 00:18:22.646 "config": [ 00:18:22.646 { 00:18:22.646 "method": "nvmf_set_config", 00:18:22.646 "params": { 00:18:22.646 "discovery_filter": "match_any", 00:18:22.646 "admin_cmd_passthru": { 00:18:22.646 "identify_ctrlr": false 00:18:22.646 } 00:18:22.646 } 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "method": "nvmf_set_max_subsystems", 00:18:22.646 "params": { 00:18:22.646 "max_subsystems": 1024 00:18:22.646 } 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "method": "nvmf_set_crdt", 00:18:22.646 "params": { 00:18:22.646 "crdt1": 0, 00:18:22.646 "crdt2": 0, 00:18:22.646 "crdt3": 0 00:18:22.646 } 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "method": "nvmf_create_transport", 00:18:22.646 "params": { 00:18:22.646 "trtype": "TCP", 00:18:22.646 "max_queue_depth": 128, 00:18:22.646 "max_io_qpairs_per_ctrlr": 127, 00:18:22.646 "in_capsule_data_size": 4096, 00:18:22.646 "max_io_size": 131072, 00:18:22.646 "io_unit_size": 131072, 00:18:22.646 "max_aq_depth": 128, 00:18:22.646 "num_shared_buffers": 511, 00:18:22.646 "buf_cache_size": 4294967295, 00:18:22.646 "dif_insert_or_strip": false, 00:18:22.646 "zcopy": false, 00:18:22.646 "c2h_success": false, 00:18:22.646 "sock_priority": 0, 00:18:22.646 "abort_timeout_sec": 1, 00:18:22.646 "ack_timeout": 0, 00:18:22.646 "data_wr_pool_size": 0 00:18:22.646 } 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "method": "nvmf_create_subsystem", 00:18:22.646 "params": { 00:18:22.646 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.646 "allow_any_host": false, 00:18:22.646 "serial_number": "SPDK00000000000001", 00:18:22.646 "model_number": "SPDK bdev Controller", 00:18:22.646 "max_namespaces": 10, 00:18:22.646 "min_cntlid": 1, 00:18:22.646 "max_cntlid": 65519, 00:18:22.646 "ana_reporting": false 00:18:22.646 } 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "method": "nvmf_subsystem_add_host", 00:18:22.646 "params": { 00:18:22.646 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.646 "host": "nqn.2016-06.io.spdk:host1", 
00:18:22.646 "psk": "/tmp/tmp.0wbser70b5" 00:18:22.646 } 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "method": "nvmf_subsystem_add_ns", 00:18:22.646 "params": { 00:18:22.646 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.646 "namespace": { 00:18:22.646 "nsid": 1, 00:18:22.646 "bdev_name": "malloc0", 00:18:22.646 "nguid": "620E9719D5614AFE813ADDEE4EA16F5A", 00:18:22.646 "uuid": "620e9719-d561-4afe-813a-ddee4ea16f5a", 00:18:22.646 "no_auto_visible": false 00:18:22.646 } 00:18:22.646 } 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "method": "nvmf_subsystem_add_listener", 00:18:22.646 "params": { 00:18:22.646 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.646 "listen_address": { 00:18:22.646 "trtype": "TCP", 00:18:22.646 "adrfam": "IPv4", 00:18:22.646 "traddr": "10.0.0.2", 00:18:22.646 "trsvcid": "4420" 00:18:22.646 }, 00:18:22.646 "secure_channel": true 00:18:22.646 } 00:18:22.646 } 00:18:22.646 ] 00:18:22.646 } 00:18:22.646 ] 00:18:22.646 }' 00:18:22.646 12:14:23 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:22.907 12:14:24 -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:22.907 "subsystems": [ 00:18:22.907 { 00:18:22.907 "subsystem": "keyring", 00:18:22.907 "config": [] 00:18:22.907 }, 00:18:22.907 { 00:18:22.907 "subsystem": "iobuf", 00:18:22.907 "config": [ 00:18:22.907 { 00:18:22.907 "method": "iobuf_set_options", 00:18:22.907 "params": { 00:18:22.907 "small_pool_count": 8192, 00:18:22.907 "large_pool_count": 1024, 00:18:22.907 "small_bufsize": 8192, 00:18:22.907 "large_bufsize": 135168 00:18:22.907 } 00:18:22.907 } 00:18:22.907 ] 00:18:22.907 }, 00:18:22.907 { 00:18:22.907 "subsystem": "sock", 00:18:22.907 "config": [ 00:18:22.907 { 00:18:22.907 "method": "sock_impl_set_options", 00:18:22.907 "params": { 00:18:22.907 "impl_name": "posix", 00:18:22.907 "recv_buf_size": 2097152, 00:18:22.907 "send_buf_size": 2097152, 00:18:22.907 "enable_recv_pipe": true, 00:18:22.907 "enable_quickack": false, 00:18:22.907 "enable_placement_id": 0, 00:18:22.907 "enable_zerocopy_send_server": true, 00:18:22.907 "enable_zerocopy_send_client": false, 00:18:22.907 "zerocopy_threshold": 0, 00:18:22.907 "tls_version": 0, 00:18:22.907 "enable_ktls": false 00:18:22.907 } 00:18:22.907 }, 00:18:22.907 { 00:18:22.907 "method": "sock_impl_set_options", 00:18:22.907 "params": { 00:18:22.907 "impl_name": "ssl", 00:18:22.907 "recv_buf_size": 4096, 00:18:22.907 "send_buf_size": 4096, 00:18:22.907 "enable_recv_pipe": true, 00:18:22.907 "enable_quickack": false, 00:18:22.907 "enable_placement_id": 0, 00:18:22.907 "enable_zerocopy_send_server": true, 00:18:22.907 "enable_zerocopy_send_client": false, 00:18:22.907 "zerocopy_threshold": 0, 00:18:22.908 "tls_version": 0, 00:18:22.908 "enable_ktls": false 00:18:22.908 } 00:18:22.908 } 00:18:22.908 ] 00:18:22.908 }, 00:18:22.908 { 00:18:22.908 "subsystem": "vmd", 00:18:22.908 "config": [] 00:18:22.908 }, 00:18:22.908 { 00:18:22.908 "subsystem": "accel", 00:18:22.908 "config": [ 00:18:22.908 { 00:18:22.908 "method": "accel_set_options", 00:18:22.908 "params": { 00:18:22.908 "small_cache_size": 128, 00:18:22.908 "large_cache_size": 16, 00:18:22.908 "task_count": 2048, 00:18:22.908 "sequence_count": 2048, 00:18:22.908 "buf_count": 2048 00:18:22.908 } 00:18:22.908 } 00:18:22.908 ] 00:18:22.908 }, 00:18:22.908 { 00:18:22.908 "subsystem": "bdev", 00:18:22.908 "config": [ 00:18:22.908 { 00:18:22.908 "method": "bdev_set_options", 00:18:22.908 "params": { 00:18:22.908 "bdev_io_pool_size": 65535, 
00:18:22.908 "bdev_io_cache_size": 256, 00:18:22.908 "bdev_auto_examine": true, 00:18:22.908 "iobuf_small_cache_size": 128, 00:18:22.908 "iobuf_large_cache_size": 16 00:18:22.908 } 00:18:22.908 }, 00:18:22.908 { 00:18:22.908 "method": "bdev_raid_set_options", 00:18:22.908 "params": { 00:18:22.908 "process_window_size_kb": 1024 00:18:22.908 } 00:18:22.908 }, 00:18:22.908 { 00:18:22.908 "method": "bdev_iscsi_set_options", 00:18:22.908 "params": { 00:18:22.908 "timeout_sec": 30 00:18:22.908 } 00:18:22.908 }, 00:18:22.908 { 00:18:22.908 "method": "bdev_nvme_set_options", 00:18:22.908 "params": { 00:18:22.908 "action_on_timeout": "none", 00:18:22.908 "timeout_us": 0, 00:18:22.908 "timeout_admin_us": 0, 00:18:22.908 "keep_alive_timeout_ms": 10000, 00:18:22.908 "arbitration_burst": 0, 00:18:22.908 "low_priority_weight": 0, 00:18:22.908 "medium_priority_weight": 0, 00:18:22.908 "high_priority_weight": 0, 00:18:22.908 "nvme_adminq_poll_period_us": 10000, 00:18:22.908 "nvme_ioq_poll_period_us": 0, 00:18:22.908 "io_queue_requests": 512, 00:18:22.908 "delay_cmd_submit": true, 00:18:22.908 "transport_retry_count": 4, 00:18:22.908 "bdev_retry_count": 3, 00:18:22.908 "transport_ack_timeout": 0, 00:18:22.908 "ctrlr_loss_timeout_sec": 0, 00:18:22.908 "reconnect_delay_sec": 0, 00:18:22.908 "fast_io_fail_timeout_sec": 0, 00:18:22.908 "disable_auto_failback": false, 00:18:22.908 "generate_uuids": false, 00:18:22.908 "transport_tos": 0, 00:18:22.908 "nvme_error_stat": false, 00:18:22.908 "rdma_srq_size": 0, 00:18:22.908 "io_path_stat": false, 00:18:22.908 "allow_accel_sequence": false, 00:18:22.908 "rdma_max_cq_size": 0, 00:18:22.908 "rdma_cm_event_timeout_ms": 0, 00:18:22.908 "dhchap_digests": [ 00:18:22.908 "sha256", 00:18:22.908 "sha384", 00:18:22.908 "sha512" 00:18:22.908 ], 00:18:22.908 "dhchap_dhgroups": [ 00:18:22.908 "null", 00:18:22.908 "ffdhe2048", 00:18:22.908 "ffdhe3072", 00:18:22.908 "ffdhe4096", 00:18:22.908 "ffdhe6144", 00:18:22.908 "ffdhe8192" 00:18:22.908 ] 00:18:22.908 } 00:18:22.908 }, 00:18:22.908 { 00:18:22.908 "method": "bdev_nvme_attach_controller", 00:18:22.908 "params": { 00:18:22.908 "name": "TLSTEST", 00:18:22.908 "trtype": "TCP", 00:18:22.908 "adrfam": "IPv4", 00:18:22.908 "traddr": "10.0.0.2", 00:18:22.908 "trsvcid": "4420", 00:18:22.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.908 "prchk_reftag": false, 00:18:22.908 "prchk_guard": false, 00:18:22.908 "ctrlr_loss_timeout_sec": 0, 00:18:22.908 "reconnect_delay_sec": 0, 00:18:22.908 "fast_io_fail_timeout_sec": 0, 00:18:22.908 "psk": "/tmp/tmp.0wbser70b5", 00:18:22.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.908 "hdgst": false, 00:18:22.908 "ddgst": false 00:18:22.908 } 00:18:22.908 }, 00:18:22.908 { 00:18:22.908 "method": "bdev_nvme_set_hotplug", 00:18:22.908 "params": { 00:18:22.908 "period_us": 100000, 00:18:22.908 "enable": false 00:18:22.908 } 00:18:22.908 }, 00:18:22.908 { 00:18:22.908 "method": "bdev_wait_for_examine" 00:18:22.908 } 00:18:22.908 ] 00:18:22.908 }, 00:18:22.908 { 00:18:22.908 "subsystem": "nbd", 00:18:22.908 "config": [] 00:18:22.908 } 00:18:22.908 ] 00:18:22.908 }' 00:18:22.908 12:14:24 -- target/tls.sh@199 -- # killprocess 3424746 00:18:22.908 12:14:24 -- common/autotest_common.sh@936 -- # '[' -z 3424746 ']' 00:18:22.908 12:14:24 -- common/autotest_common.sh@940 -- # kill -0 3424746 00:18:22.908 12:14:24 -- common/autotest_common.sh@941 -- # uname 00:18:22.908 12:14:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:22.908 12:14:24 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 3424746 00:18:22.908 12:14:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:22.908 12:14:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:22.908 12:14:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3424746' 00:18:22.908 killing process with pid 3424746 00:18:22.908 12:14:24 -- common/autotest_common.sh@955 -- # kill 3424746 00:18:22.908 Received shutdown signal, test time was about 10.000000 seconds 00:18:22.908 00:18:22.908 Latency(us) 00:18:22.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.908 =================================================================================================================== 00:18:22.908 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:22.908 [2024-04-26 12:14:24.114748] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:22.908 12:14:24 -- common/autotest_common.sh@960 -- # wait 3424746 00:18:23.169 12:14:24 -- target/tls.sh@200 -- # killprocess 3424380 00:18:23.169 12:14:24 -- common/autotest_common.sh@936 -- # '[' -z 3424380 ']' 00:18:23.169 12:14:24 -- common/autotest_common.sh@940 -- # kill -0 3424380 00:18:23.169 12:14:24 -- common/autotest_common.sh@941 -- # uname 00:18:23.169 12:14:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:23.169 12:14:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3424380 00:18:23.169 12:14:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:23.169 12:14:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:23.169 12:14:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3424380' 00:18:23.169 killing process with pid 3424380 00:18:23.170 12:14:24 -- common/autotest_common.sh@955 -- # kill 3424380 00:18:23.170 [2024-04-26 12:14:24.280276] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:23.170 12:14:24 -- common/autotest_common.sh@960 -- # wait 3424380 00:18:23.431 12:14:24 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:23.431 12:14:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:23.431 12:14:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:23.431 12:14:24 -- common/autotest_common.sh@10 -- # set +x 00:18:23.431 12:14:24 -- target/tls.sh@203 -- # echo '{ 00:18:23.431 "subsystems": [ 00:18:23.431 { 00:18:23.431 "subsystem": "keyring", 00:18:23.431 "config": [] 00:18:23.431 }, 00:18:23.431 { 00:18:23.431 "subsystem": "iobuf", 00:18:23.431 "config": [ 00:18:23.431 { 00:18:23.431 "method": "iobuf_set_options", 00:18:23.431 "params": { 00:18:23.431 "small_pool_count": 8192, 00:18:23.431 "large_pool_count": 1024, 00:18:23.431 "small_bufsize": 8192, 00:18:23.431 "large_bufsize": 135168 00:18:23.431 } 00:18:23.431 } 00:18:23.431 ] 00:18:23.431 }, 00:18:23.431 { 00:18:23.431 "subsystem": "sock", 00:18:23.431 "config": [ 00:18:23.431 { 00:18:23.431 "method": "sock_impl_set_options", 00:18:23.431 "params": { 00:18:23.431 "impl_name": "posix", 00:18:23.431 "recv_buf_size": 2097152, 00:18:23.431 "send_buf_size": 2097152, 00:18:23.431 "enable_recv_pipe": true, 00:18:23.431 "enable_quickack": false, 00:18:23.431 "enable_placement_id": 0, 00:18:23.431 "enable_zerocopy_send_server": true, 00:18:23.431 "enable_zerocopy_send_client": false, 00:18:23.431 "zerocopy_threshold": 0, 
00:18:23.431 "tls_version": 0, 00:18:23.431 "enable_ktls": false 00:18:23.431 } 00:18:23.431 }, 00:18:23.431 { 00:18:23.431 "method": "sock_impl_set_options", 00:18:23.431 "params": { 00:18:23.431 "impl_name": "ssl", 00:18:23.431 "recv_buf_size": 4096, 00:18:23.431 "send_buf_size": 4096, 00:18:23.431 "enable_recv_pipe": true, 00:18:23.431 "enable_quickack": false, 00:18:23.431 "enable_placement_id": 0, 00:18:23.431 "enable_zerocopy_send_server": true, 00:18:23.431 "enable_zerocopy_send_client": false, 00:18:23.431 "zerocopy_threshold": 0, 00:18:23.431 "tls_version": 0, 00:18:23.431 "enable_ktls": false 00:18:23.431 } 00:18:23.431 } 00:18:23.431 ] 00:18:23.431 }, 00:18:23.431 { 00:18:23.431 "subsystem": "vmd", 00:18:23.431 "config": [] 00:18:23.431 }, 00:18:23.431 { 00:18:23.431 "subsystem": "accel", 00:18:23.431 "config": [ 00:18:23.431 { 00:18:23.431 "method": "accel_set_options", 00:18:23.431 "params": { 00:18:23.431 "small_cache_size": 128, 00:18:23.431 "large_cache_size": 16, 00:18:23.431 "task_count": 2048, 00:18:23.431 "sequence_count": 2048, 00:18:23.431 "buf_count": 2048 00:18:23.431 } 00:18:23.431 } 00:18:23.431 ] 00:18:23.431 }, 00:18:23.431 { 00:18:23.431 "subsystem": "bdev", 00:18:23.431 "config": [ 00:18:23.431 { 00:18:23.431 "method": "bdev_set_options", 00:18:23.431 "params": { 00:18:23.431 "bdev_io_pool_size": 65535, 00:18:23.431 "bdev_io_cache_size": 256, 00:18:23.431 "bdev_auto_examine": true, 00:18:23.431 "iobuf_small_cache_size": 128, 00:18:23.431 "iobuf_large_cache_size": 16 00:18:23.431 } 00:18:23.431 }, 00:18:23.431 { 00:18:23.431 "method": "bdev_raid_set_options", 00:18:23.431 "params": { 00:18:23.431 "process_window_size_kb": 1024 00:18:23.431 } 00:18:23.431 }, 00:18:23.431 { 00:18:23.432 "method": "bdev_iscsi_set_options", 00:18:23.432 "params": { 00:18:23.432 "timeout_sec": 30 00:18:23.432 } 00:18:23.432 }, 00:18:23.432 { 00:18:23.432 "method": "bdev_nvme_set_options", 00:18:23.432 "params": { 00:18:23.432 "action_on_timeout": "none", 00:18:23.432 "timeout_us": 0, 00:18:23.432 "timeout_admin_us": 0, 00:18:23.432 "keep_alive_timeout_ms": 10000, 00:18:23.432 "arbitration_burst": 0, 00:18:23.432 "low_priority_weight": 0, 00:18:23.432 "medium_priority_weight": 0, 00:18:23.432 "high_priority_weight": 0, 00:18:23.432 "nvme_adminq_poll_period_us": 10000, 00:18:23.432 "nvme_ioq_poll_period_us": 0, 00:18:23.432 "io_queue_requests": 0, 00:18:23.432 "delay_cmd_submit": true, 00:18:23.432 "transport_retry_count": 4, 00:18:23.432 "bdev_retry_count": 3, 00:18:23.432 "transport_ack_timeout": 0, 00:18:23.432 "ctrlr_loss_timeout_sec": 0, 00:18:23.432 "reconnect_delay_sec": 0, 00:18:23.432 "fast_io_fail_timeout_sec": 0, 00:18:23.432 "disable_auto_failback": false, 00:18:23.432 "generate_uuids": false, 00:18:23.432 "transport_tos": 0, 00:18:23.432 "nvme_error_stat": false, 00:18:23.432 "rdma_srq_size": 0, 00:18:23.432 "io_path_stat": false, 00:18:23.432 "allow_accel_sequence": false, 00:18:23.432 "rdma_max_cq_size": 0, 00:18:23.432 "rdma_cm_event_timeout_ms": 0, 00:18:23.432 "dhchap_digests": [ 00:18:23.432 "sha256", 00:18:23.432 "sha384", 00:18:23.432 "sha512" 00:18:23.432 ], 00:18:23.432 "dhchap_dhgroups": [ 00:18:23.432 "null", 00:18:23.432 "ffdhe2048", 00:18:23.432 "ffdhe3072", 00:18:23.432 "ffdhe4096", 00:18:23.432 "ffdhe6144", 00:18:23.432 "ffdhe8192" 00:18:23.432 ] 00:18:23.432 } 00:18:23.432 }, 00:18:23.432 { 00:18:23.432 "method": "bdev_nvme_set_hotplug", 00:18:23.432 "params": { 00:18:23.432 "period_us": 100000, 00:18:23.432 "enable": false 00:18:23.432 } 00:18:23.432 }, 
00:18:23.432 { 00:18:23.432 "method": "bdev_malloc_create", 00:18:23.432 "params": { 00:18:23.432 "name": "malloc0", 00:18:23.432 "num_blocks": 8192, 00:18:23.432 "block_size": 4096, 00:18:23.432 "physical_block_size": 4096, 00:18:23.432 "uuid": "620e9719-d561-4afe-813a-ddee4ea16f5a", 00:18:23.432 "optimal_io_boundary": 0 00:18:23.432 } 00:18:23.432 }, 00:18:23.432 { 00:18:23.432 "method": "bdev_wait_for_examine" 00:18:23.432 } 00:18:23.432 ] 00:18:23.432 }, 00:18:23.432 { 00:18:23.432 "subsystem": "nbd", 00:18:23.432 "config": [] 00:18:23.432 }, 00:18:23.432 { 00:18:23.432 "subsystem": "scheduler", 00:18:23.432 "config": [ 00:18:23.432 { 00:18:23.432 "method": "framework_set_scheduler", 00:18:23.432 "params": { 00:18:23.432 "name": "static" 00:18:23.432 } 00:18:23.432 } 00:18:23.432 ] 00:18:23.432 }, 00:18:23.432 { 00:18:23.432 "subsystem": "nvmf", 00:18:23.432 "config": [ 00:18:23.432 { 00:18:23.432 "method": "nvmf_set_config", 00:18:23.432 "params": { 00:18:23.432 "discovery_filter": "match_any", 00:18:23.432 "admin_cmd_passthru": { 00:18:23.432 "identify_ctrlr": false 00:18:23.432 } 00:18:23.432 } 00:18:23.432 }, 00:18:23.432 { 00:18:23.432 "method": "nvmf_set_max_subsystems", 00:18:23.432 "params": { 00:18:23.432 "max_subsystems": 1024 00:18:23.432 } 00:18:23.432 }, 00:18:23.432 { 00:18:23.432 "method": "nvmf_set_crdt", 00:18:23.432 "params": { 00:18:23.432 "crdt1": 0, 00:18:23.432 "crdt2": 0, 00:18:23.432 "crdt3": 0 00:18:23.432 } 00:18:23.432 }, 00:18:23.432 { 00:18:23.432 "method": "nvmf_create_transport", 00:18:23.432 "params": { 00:18:23.432 "trtype": "TCP", 00:18:23.432 "max_queue_depth": 128, 00:18:23.432 "max_io_qpairs_per_ctrlr": 127, 00:18:23.432 "in_capsule_data_size": 4096, 00:18:23.432 "max_io_size": 131072, 00:18:23.432 "io_unit_size": 131072, 00:18:23.432 "max_aq_depth": 128, 00:18:23.432 "num_shared_buffers": 511, 00:18:23.432 "buf_cache_size": 4294967295, 00:18:23.432 "dif_insert_or_strip": false, 00:18:23.432 "zcopy": false, 00:18:23.432 "c2h_success": false, 00:18:23.432 "sock_priority": 0, 00:18:23.432 "abort_timeout_sec": 1, 00:18:23.432 "ack_timeout": 0, 00:18:23.432 "data_wr_pool_size": 0 00:18:23.432 } 00:18:23.432 }, 00:18:23.432 { 00:18:23.432 "method": "nvmf_create_subsystem", 00:18:23.432 "params": { 00:18:23.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.432 "allow_any_host": false, 00:18:23.432 "serial_number": "SPDK00000000000001", 00:18:23.432 "model_number": "SPDK bdev Controller", 00:18:23.432 "max_namespaces": 10, 00:18:23.432 "min_cntlid": 1, 00:18:23.432 "max_cntlid": 65519, 00:18:23.432 "ana_reporting": false 00:18:23.432 } 00:18:23.432 }, 00:18:23.432 { 00:18:23.432 "method": "nvmf_subsystem_add_host", 00:18:23.432 "params": { 00:18:23.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.432 "host": "nqn.2016-06.io.spdk:host1", 00:18:23.432 "psk": "/tmp/tmp.0wbser70b5" 00:18:23.432 } 00:18:23.432 }, 00:18:23.432 { 00:18:23.432 "method": "nvmf_subsystem_add_ns", 00:18:23.432 "params": { 00:18:23.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.432 "namespace": { 00:18:23.432 "nsid": 1, 00:18:23.432 "bdev_name": "malloc0", 00:18:23.432 "nguid": "620E9719D5614AFE813ADDEE4EA16F5A", 00:18:23.432 "uuid": "620e9719-d561-4afe-813a-ddee4ea16f5a", 00:18:23.432 "no_auto_visible": false 00:18:23.432 } 00:18:23.432 } 00:18:23.432 }, 00:18:23.432 { 00:18:23.432 "method": "nvmf_subsystem_add_listener", 00:18:23.432 "params": { 00:18:23.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.432 "listen_address": { 00:18:23.432 "trtype": "TCP", 00:18:23.432 "adrfam": 
"IPv4", 00:18:23.432 "traddr": "10.0.0.2", 00:18:23.432 "trsvcid": "4420" 00:18:23.432 }, 00:18:23.432 "secure_channel": true 00:18:23.432 } 00:18:23.432 } 00:18:23.432 ] 00:18:23.432 } 00:18:23.432 ] 00:18:23.432 }' 00:18:23.432 12:14:24 -- nvmf/common.sh@470 -- # nvmfpid=3425088 00:18:23.432 12:14:24 -- nvmf/common.sh@471 -- # waitforlisten 3425088 00:18:23.432 12:14:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:23.432 12:14:24 -- common/autotest_common.sh@817 -- # '[' -z 3425088 ']' 00:18:23.432 12:14:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.432 12:14:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:23.432 12:14:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.432 12:14:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:23.432 12:14:24 -- common/autotest_common.sh@10 -- # set +x 00:18:23.432 [2024-04-26 12:14:24.467797] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:23.432 [2024-04-26 12:14:24.467851] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.432 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.432 [2024-04-26 12:14:24.547390] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.432 [2024-04-26 12:14:24.597064] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.432 [2024-04-26 12:14:24.597099] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.432 [2024-04-26 12:14:24.597105] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.432 [2024-04-26 12:14:24.597109] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.432 [2024-04-26 12:14:24.597113] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:23.432 [2024-04-26 12:14:24.597165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.692 [2024-04-26 12:14:24.772709] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.692 [2024-04-26 12:14:24.788682] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:23.692 [2024-04-26 12:14:24.804732] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:23.692 [2024-04-26 12:14:24.820139] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.265 12:14:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:24.265 12:14:25 -- common/autotest_common.sh@850 -- # return 0 00:18:24.265 12:14:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:24.265 12:14:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:24.265 12:14:25 -- common/autotest_common.sh@10 -- # set +x 00:18:24.265 12:14:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.265 12:14:25 -- target/tls.sh@207 -- # bdevperf_pid=3425129 00:18:24.265 12:14:25 -- target/tls.sh@208 -- # waitforlisten 3425129 /var/tmp/bdevperf.sock 00:18:24.265 12:14:25 -- common/autotest_common.sh@817 -- # '[' -z 3425129 ']' 00:18:24.265 12:14:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.265 12:14:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:24.265 12:14:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:24.265 12:14:25 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:24.265 12:14:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:24.265 12:14:25 -- common/autotest_common.sh@10 -- # set +x 00:18:24.265 12:14:25 -- target/tls.sh@204 -- # echo '{ 00:18:24.265 "subsystems": [ 00:18:24.265 { 00:18:24.265 "subsystem": "keyring", 00:18:24.265 "config": [] 00:18:24.265 }, 00:18:24.265 { 00:18:24.265 "subsystem": "iobuf", 00:18:24.265 "config": [ 00:18:24.265 { 00:18:24.265 "method": "iobuf_set_options", 00:18:24.265 "params": { 00:18:24.265 "small_pool_count": 8192, 00:18:24.265 "large_pool_count": 1024, 00:18:24.265 "small_bufsize": 8192, 00:18:24.265 "large_bufsize": 135168 00:18:24.265 } 00:18:24.265 } 00:18:24.265 ] 00:18:24.265 }, 00:18:24.265 { 00:18:24.265 "subsystem": "sock", 00:18:24.265 "config": [ 00:18:24.265 { 00:18:24.265 "method": "sock_impl_set_options", 00:18:24.265 "params": { 00:18:24.265 "impl_name": "posix", 00:18:24.265 "recv_buf_size": 2097152, 00:18:24.265 "send_buf_size": 2097152, 00:18:24.265 "enable_recv_pipe": true, 00:18:24.265 "enable_quickack": false, 00:18:24.265 "enable_placement_id": 0, 00:18:24.265 "enable_zerocopy_send_server": true, 00:18:24.265 "enable_zerocopy_send_client": false, 00:18:24.265 "zerocopy_threshold": 0, 00:18:24.265 "tls_version": 0, 00:18:24.265 "enable_ktls": false 00:18:24.265 } 00:18:24.265 }, 00:18:24.265 { 00:18:24.265 "method": "sock_impl_set_options", 00:18:24.265 "params": { 00:18:24.265 "impl_name": "ssl", 00:18:24.265 "recv_buf_size": 4096, 00:18:24.265 "send_buf_size": 4096, 00:18:24.265 "enable_recv_pipe": true, 00:18:24.265 "enable_quickack": false, 00:18:24.265 "enable_placement_id": 0, 00:18:24.265 "enable_zerocopy_send_server": true, 00:18:24.265 "enable_zerocopy_send_client": false, 00:18:24.265 "zerocopy_threshold": 0, 00:18:24.265 "tls_version": 0, 00:18:24.265 "enable_ktls": false 00:18:24.265 } 00:18:24.265 } 00:18:24.265 ] 00:18:24.265 }, 00:18:24.265 { 00:18:24.265 "subsystem": "vmd", 00:18:24.265 "config": [] 00:18:24.265 }, 00:18:24.265 { 00:18:24.265 "subsystem": "accel", 00:18:24.265 "config": [ 00:18:24.265 { 00:18:24.265 "method": "accel_set_options", 00:18:24.265 "params": { 00:18:24.265 "small_cache_size": 128, 00:18:24.265 "large_cache_size": 16, 00:18:24.265 "task_count": 2048, 00:18:24.265 "sequence_count": 2048, 00:18:24.265 "buf_count": 2048 00:18:24.265 } 00:18:24.265 } 00:18:24.265 ] 00:18:24.265 }, 00:18:24.265 { 00:18:24.265 "subsystem": "bdev", 00:18:24.265 "config": [ 00:18:24.265 { 00:18:24.265 "method": "bdev_set_options", 00:18:24.265 "params": { 00:18:24.265 "bdev_io_pool_size": 65535, 00:18:24.265 "bdev_io_cache_size": 256, 00:18:24.265 "bdev_auto_examine": true, 00:18:24.265 "iobuf_small_cache_size": 128, 00:18:24.265 "iobuf_large_cache_size": 16 00:18:24.265 } 00:18:24.265 }, 00:18:24.265 { 00:18:24.265 "method": "bdev_raid_set_options", 00:18:24.265 "params": { 00:18:24.265 "process_window_size_kb": 1024 00:18:24.265 } 00:18:24.265 }, 00:18:24.265 { 00:18:24.265 "method": "bdev_iscsi_set_options", 00:18:24.265 "params": { 00:18:24.265 "timeout_sec": 30 00:18:24.265 } 00:18:24.265 }, 00:18:24.265 { 00:18:24.265 "method": "bdev_nvme_set_options", 00:18:24.265 "params": { 00:18:24.265 "action_on_timeout": "none", 00:18:24.265 "timeout_us": 0, 00:18:24.265 "timeout_admin_us": 0, 00:18:24.265 "keep_alive_timeout_ms": 10000, 00:18:24.265 
"arbitration_burst": 0, 00:18:24.265 "low_priority_weight": 0, 00:18:24.265 "medium_priority_weight": 0, 00:18:24.265 "high_priority_weight": 0, 00:18:24.265 "nvme_adminq_poll_period_us": 10000, 00:18:24.265 "nvme_ioq_poll_period_us": 0, 00:18:24.265 "io_queue_requests": 512, 00:18:24.265 "delay_cmd_submit": true, 00:18:24.265 "transport_retry_count": 4, 00:18:24.265 "bdev_retry_count": 3, 00:18:24.265 "transport_ack_timeout": 0, 00:18:24.265 "ctrlr_loss_timeout_sec": 0, 00:18:24.265 "reconnect_delay_sec": 0, 00:18:24.265 "fast_io_fail_timeout_sec": 0, 00:18:24.265 "disable_auto_failback": false, 00:18:24.265 "generate_uuids": false, 00:18:24.265 "transport_tos": 0, 00:18:24.265 "nvme_error_stat": false, 00:18:24.265 "rdma_srq_size": 0, 00:18:24.265 "io_path_stat": false, 00:18:24.265 "allow_accel_sequence": false, 00:18:24.265 "rdma_max_cq_size": 0, 00:18:24.265 "rdma_cm_event_timeout_ms": 0, 00:18:24.265 "dhchap_digests": [ 00:18:24.265 "sha256", 00:18:24.265 "sha384", 00:18:24.265 "sha512" 00:18:24.265 ], 00:18:24.265 "dhchap_dhgroups": [ 00:18:24.265 "null", 00:18:24.265 "ffdhe2048", 00:18:24.265 "ffdhe3072", 00:18:24.265 "ffdhe4096", 00:18:24.265 "ffdhe6144", 00:18:24.265 "ffdhe8192" 00:18:24.265 ] 00:18:24.265 } 00:18:24.265 }, 00:18:24.265 { 00:18:24.265 "method": "bdev_nvme_attach_controller", 00:18:24.265 "params": { 00:18:24.265 "name": "TLSTEST", 00:18:24.265 "trtype": "TCP", 00:18:24.265 "adrfam": "IPv4", 00:18:24.265 "traddr": "10.0.0.2", 00:18:24.265 "trsvcid": "4420", 00:18:24.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.265 "prchk_reftag": false, 00:18:24.265 "prchk_guard": false, 00:18:24.265 "ctrlr_loss_timeout_sec": 0, 00:18:24.265 "reconnect_delay_sec": 0, 00:18:24.265 "fast_io_fail_timeout_sec": 0, 00:18:24.265 "psk": "/tmp/tmp.0wbser70b5", 00:18:24.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.265 "hdgst": false, 00:18:24.265 "ddgst": false 00:18:24.265 } 00:18:24.265 }, 00:18:24.265 { 00:18:24.265 "method": "bdev_nvme_set_hotplug", 00:18:24.265 "params": { 00:18:24.265 "period_us": 100000, 00:18:24.265 "enable": false 00:18:24.265 } 00:18:24.265 }, 00:18:24.265 { 00:18:24.265 "method": "bdev_wait_for_examine" 00:18:24.265 } 00:18:24.265 ] 00:18:24.265 }, 00:18:24.265 { 00:18:24.265 "subsystem": "nbd", 00:18:24.265 "config": [] 00:18:24.265 } 00:18:24.265 ] 00:18:24.265 }' 00:18:24.265 [2024-04-26 12:14:25.302044] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:18:24.265 [2024-04-26 12:14:25.302093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3425129 ] 00:18:24.265 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.265 [2024-04-26 12:14:25.352984] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.265 [2024-04-26 12:14:25.403661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.526 [2024-04-26 12:14:25.520238] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.526 [2024-04-26 12:14:25.520302] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:25.096 12:14:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:25.096 12:14:26 -- common/autotest_common.sh@850 -- # return 0 00:18:25.096 12:14:26 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:25.096 Running I/O for 10 seconds... 00:18:35.094 00:18:35.094 Latency(us) 00:18:35.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.094 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:35.094 Verification LBA range: start 0x0 length 0x2000 00:18:35.094 TLSTESTn1 : 10.02 5826.59 22.76 0.00 0.00 21935.75 5461.33 29054.29 00:18:35.094 =================================================================================================================== 00:18:35.094 Total : 5826.59 22.76 0.00 0.00 21935.75 5461.33 29054.29 00:18:35.094 0 00:18:35.094 12:14:36 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:35.094 12:14:36 -- target/tls.sh@214 -- # killprocess 3425129 00:18:35.094 12:14:36 -- common/autotest_common.sh@936 -- # '[' -z 3425129 ']' 00:18:35.094 12:14:36 -- common/autotest_common.sh@940 -- # kill -0 3425129 00:18:35.094 12:14:36 -- common/autotest_common.sh@941 -- # uname 00:18:35.094 12:14:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.094 12:14:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3425129 00:18:35.094 12:14:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:35.094 12:14:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:35.094 12:14:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3425129' 00:18:35.094 killing process with pid 3425129 00:18:35.094 12:14:36 -- common/autotest_common.sh@955 -- # kill 3425129 00:18:35.094 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.094 00:18:35.094 Latency(us) 00:18:35.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.094 =================================================================================================================== 00:18:35.095 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.095 [2024-04-26 12:14:36.285727] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:35.095 12:14:36 -- common/autotest_common.sh@960 -- # wait 3425129 00:18:35.356 12:14:36 -- target/tls.sh@215 -- # killprocess 3425088 00:18:35.356 12:14:36 -- common/autotest_common.sh@936 -- # '[' -z 3425088 ']' 
00:18:35.356 12:14:36 -- common/autotest_common.sh@940 -- # kill -0 3425088 00:18:35.356 12:14:36 -- common/autotest_common.sh@941 -- # uname 00:18:35.356 12:14:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.356 12:14:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3425088 00:18:35.356 12:14:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:35.356 12:14:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:35.356 12:14:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3425088' 00:18:35.356 killing process with pid 3425088 00:18:35.356 12:14:36 -- common/autotest_common.sh@955 -- # kill 3425088 00:18:35.356 [2024-04-26 12:14:36.454103] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:35.356 12:14:36 -- common/autotest_common.sh@960 -- # wait 3425088 00:18:35.356 12:14:36 -- target/tls.sh@218 -- # nvmfappstart 00:18:35.356 12:14:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:35.356 12:14:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:35.356 12:14:36 -- common/autotest_common.sh@10 -- # set +x 00:18:35.617 12:14:36 -- nvmf/common.sh@470 -- # nvmfpid=3427461 00:18:35.617 12:14:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:35.617 12:14:36 -- nvmf/common.sh@471 -- # waitforlisten 3427461 00:18:35.617 12:14:36 -- common/autotest_common.sh@817 -- # '[' -z 3427461 ']' 00:18:35.617 12:14:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.617 12:14:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:35.617 12:14:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.617 12:14:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:35.617 12:14:36 -- common/autotest_common.sh@10 -- # set +x 00:18:35.617 [2024-04-26 12:14:36.631135] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:35.617 [2024-04-26 12:14:36.631190] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.617 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.617 [2024-04-26 12:14:36.696252] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.617 [2024-04-26 12:14:36.759591] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.617 [2024-04-26 12:14:36.759629] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.617 [2024-04-26 12:14:36.759636] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.617 [2024-04-26 12:14:36.759642] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.617 [2024-04-26 12:14:36.759648] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:35.617 [2024-04-26 12:14:36.759670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.188 12:14:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:36.188 12:14:37 -- common/autotest_common.sh@850 -- # return 0 00:18:36.188 12:14:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:36.188 12:14:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:36.188 12:14:37 -- common/autotest_common.sh@10 -- # set +x 00:18:36.449 12:14:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.449 12:14:37 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.0wbser70b5 00:18:36.449 12:14:37 -- target/tls.sh@49 -- # local key=/tmp/tmp.0wbser70b5 00:18:36.449 12:14:37 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:36.449 [2024-04-26 12:14:37.570286] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.449 12:14:37 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:36.710 12:14:37 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:36.710 [2024-04-26 12:14:37.879095] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:36.710 [2024-04-26 12:14:37.879330] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.710 12:14:37 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:36.970 malloc0 00:18:36.970 12:14:38 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:36.970 12:14:38 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0wbser70b5 00:18:37.231 [2024-04-26 12:14:38.311007] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:37.231 12:14:38 -- target/tls.sh@222 -- # bdevperf_pid=3427827 00:18:37.231 12:14:38 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.231 12:14:38 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:37.231 12:14:38 -- target/tls.sh@225 -- # waitforlisten 3427827 /var/tmp/bdevperf.sock 00:18:37.231 12:14:38 -- common/autotest_common.sh@817 -- # '[' -z 3427827 ']' 00:18:37.231 12:14:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.231 12:14:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:37.231 12:14:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:37.231 12:14:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:37.231 12:14:38 -- common/autotest_common.sh@10 -- # set +x 00:18:37.231 [2024-04-26 12:14:38.372401] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:37.232 [2024-04-26 12:14:38.372452] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427827 ] 00:18:37.232 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.232 [2024-04-26 12:14:38.447861] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.492 [2024-04-26 12:14:38.500099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.063 12:14:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:38.063 12:14:39 -- common/autotest_common.sh@850 -- # return 0 00:18:38.063 12:14:39 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0wbser70b5 00:18:38.324 12:14:39 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:38.324 [2024-04-26 12:14:39.422144] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.324 nvme0n1 00:18:38.324 12:14:39 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:38.585 Running I/O for 1 seconds... 
00:18:39.527 00:18:39.527 Latency(us) 00:18:39.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.527 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:39.527 Verification LBA range: start 0x0 length 0x2000 00:18:39.527 nvme0n1 : 1.05 4403.70 17.20 0.00 0.00 28430.83 4532.91 49807.36 00:18:39.527 =================================================================================================================== 00:18:39.527 Total : 4403.70 17.20 0.00 0.00 28430.83 4532.91 49807.36 00:18:39.527 0 00:18:39.527 12:14:40 -- target/tls.sh@234 -- # killprocess 3427827 00:18:39.527 12:14:40 -- common/autotest_common.sh@936 -- # '[' -z 3427827 ']' 00:18:39.527 12:14:40 -- common/autotest_common.sh@940 -- # kill -0 3427827 00:18:39.527 12:14:40 -- common/autotest_common.sh@941 -- # uname 00:18:39.527 12:14:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:39.527 12:14:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3427827 00:18:39.527 12:14:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:39.527 12:14:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:39.527 12:14:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3427827' 00:18:39.527 killing process with pid 3427827 00:18:39.527 12:14:40 -- common/autotest_common.sh@955 -- # kill 3427827 00:18:39.527 Received shutdown signal, test time was about 1.000000 seconds 00:18:39.527 00:18:39.527 Latency(us) 00:18:39.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.527 =================================================================================================================== 00:18:39.527 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.527 12:14:40 -- common/autotest_common.sh@960 -- # wait 3427827 00:18:39.788 12:14:40 -- target/tls.sh@235 -- # killprocess 3427461 00:18:39.788 12:14:40 -- common/autotest_common.sh@936 -- # '[' -z 3427461 ']' 00:18:39.788 12:14:40 -- common/autotest_common.sh@940 -- # kill -0 3427461 00:18:39.788 12:14:40 -- common/autotest_common.sh@941 -- # uname 00:18:39.788 12:14:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:39.788 12:14:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3427461 00:18:39.788 12:14:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:39.788 12:14:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:39.788 12:14:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3427461' 00:18:39.788 killing process with pid 3427461 00:18:39.788 12:14:40 -- common/autotest_common.sh@955 -- # kill 3427461 00:18:39.788 [2024-04-26 12:14:40.911809] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:39.788 12:14:40 -- common/autotest_common.sh@960 -- # wait 3427461 00:18:40.050 12:14:41 -- target/tls.sh@238 -- # nvmfappstart 00:18:40.050 12:14:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:40.050 12:14:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:40.050 12:14:41 -- common/autotest_common.sh@10 -- # set +x 00:18:40.050 12:14:41 -- nvmf/common.sh@470 -- # nvmfpid=3428321 00:18:40.050 12:14:41 -- nvmf/common.sh@471 -- # waitforlisten 3428321 00:18:40.050 12:14:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:18:40.050 12:14:41 -- common/autotest_common.sh@817 -- # '[' -z 3428321 ']' 00:18:40.050 12:14:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.050 12:14:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:40.050 12:14:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.050 12:14:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:40.050 12:14:41 -- common/autotest_common.sh@10 -- # set +x 00:18:40.050 [2024-04-26 12:14:41.109441] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:40.050 [2024-04-26 12:14:41.109497] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.050 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.050 [2024-04-26 12:14:41.175537] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.050 [2024-04-26 12:14:41.239816] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.050 [2024-04-26 12:14:41.239859] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.050 [2024-04-26 12:14:41.239867] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.050 [2024-04-26 12:14:41.239873] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.050 [2024-04-26 12:14:41.239879] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:40.050 [2024-04-26 12:14:41.239897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.990 12:14:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:40.990 12:14:41 -- common/autotest_common.sh@850 -- # return 0 00:18:40.990 12:14:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:40.990 12:14:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:40.990 12:14:41 -- common/autotest_common.sh@10 -- # set +x 00:18:40.991 12:14:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.991 12:14:41 -- target/tls.sh@239 -- # rpc_cmd 00:18:40.991 12:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:40.991 12:14:41 -- common/autotest_common.sh@10 -- # set +x 00:18:40.991 [2024-04-26 12:14:41.914443] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.991 malloc0 00:18:40.991 [2024-04-26 12:14:41.941201] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.991 [2024-04-26 12:14:41.941392] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.991 12:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.991 12:14:41 -- target/tls.sh@252 -- # bdevperf_pid=3428533 00:18:40.991 12:14:41 -- target/tls.sh@254 -- # waitforlisten 3428533 /var/tmp/bdevperf.sock 00:18:40.991 12:14:41 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:40.991 12:14:41 -- common/autotest_common.sh@817 -- # '[' -z 3428533 ']' 00:18:40.991 12:14:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.991 12:14:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:40.991 12:14:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.991 12:14:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:40.991 12:14:41 -- common/autotest_common.sh@10 -- # set +x 00:18:40.991 [2024-04-26 12:14:42.018238] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:18:40.991 [2024-04-26 12:14:42.018284] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428533 ] 00:18:40.991 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.991 [2024-04-26 12:14:42.093567] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.991 [2024-04-26 12:14:42.145598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.561 12:14:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:41.561 12:14:42 -- common/autotest_common.sh@850 -- # return 0 00:18:41.561 12:14:42 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0wbser70b5 00:18:41.820 12:14:42 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:42.081 [2024-04-26 12:14:43.071596] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.081 nvme0n1 00:18:42.081 12:14:43 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:42.081 Running I/O for 1 seconds... 00:18:43.465 00:18:43.466 Latency(us) 00:18:43.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.466 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:43.466 Verification LBA range: start 0x0 length 0x2000 00:18:43.466 nvme0n1 : 1.02 5298.54 20.70 0.00 0.00 23935.19 4505.60 52210.35 00:18:43.466 =================================================================================================================== 00:18:43.466 Total : 5298.54 20.70 0.00 0.00 23935.19 4505.60 52210.35 00:18:43.466 0 00:18:43.466 12:14:44 -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:43.466 12:14:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.466 12:14:44 -- common/autotest_common.sh@10 -- # set +x 00:18:43.466 12:14:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.466 12:14:44 -- target/tls.sh@263 -- # tgtcfg='{ 00:18:43.466 "subsystems": [ 00:18:43.466 { 00:18:43.466 "subsystem": "keyring", 00:18:43.466 "config": [ 00:18:43.466 { 00:18:43.466 "method": "keyring_file_add_key", 00:18:43.466 "params": { 00:18:43.466 "name": "key0", 00:18:43.466 "path": "/tmp/tmp.0wbser70b5" 00:18:43.466 } 00:18:43.466 } 00:18:43.466 ] 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "subsystem": "iobuf", 00:18:43.466 "config": [ 00:18:43.466 { 00:18:43.466 "method": "iobuf_set_options", 00:18:43.466 "params": { 00:18:43.466 "small_pool_count": 8192, 00:18:43.466 "large_pool_count": 1024, 00:18:43.466 "small_bufsize": 8192, 00:18:43.466 "large_bufsize": 135168 00:18:43.466 } 00:18:43.466 } 00:18:43.466 ] 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "subsystem": "sock", 00:18:43.466 "config": [ 00:18:43.466 { 00:18:43.466 "method": "sock_impl_set_options", 00:18:43.466 "params": { 00:18:43.466 "impl_name": "posix", 00:18:43.466 "recv_buf_size": 2097152, 00:18:43.466 "send_buf_size": 2097152, 00:18:43.466 "enable_recv_pipe": true, 00:18:43.466 "enable_quickack": false, 00:18:43.466 "enable_placement_id": 0, 00:18:43.466 
"enable_zerocopy_send_server": true, 00:18:43.466 "enable_zerocopy_send_client": false, 00:18:43.466 "zerocopy_threshold": 0, 00:18:43.466 "tls_version": 0, 00:18:43.466 "enable_ktls": false 00:18:43.466 } 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "method": "sock_impl_set_options", 00:18:43.466 "params": { 00:18:43.466 "impl_name": "ssl", 00:18:43.466 "recv_buf_size": 4096, 00:18:43.466 "send_buf_size": 4096, 00:18:43.466 "enable_recv_pipe": true, 00:18:43.466 "enable_quickack": false, 00:18:43.466 "enable_placement_id": 0, 00:18:43.466 "enable_zerocopy_send_server": true, 00:18:43.466 "enable_zerocopy_send_client": false, 00:18:43.466 "zerocopy_threshold": 0, 00:18:43.466 "tls_version": 0, 00:18:43.466 "enable_ktls": false 00:18:43.466 } 00:18:43.466 } 00:18:43.466 ] 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "subsystem": "vmd", 00:18:43.466 "config": [] 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "subsystem": "accel", 00:18:43.466 "config": [ 00:18:43.466 { 00:18:43.466 "method": "accel_set_options", 00:18:43.466 "params": { 00:18:43.466 "small_cache_size": 128, 00:18:43.466 "large_cache_size": 16, 00:18:43.466 "task_count": 2048, 00:18:43.466 "sequence_count": 2048, 00:18:43.466 "buf_count": 2048 00:18:43.466 } 00:18:43.466 } 00:18:43.466 ] 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "subsystem": "bdev", 00:18:43.466 "config": [ 00:18:43.466 { 00:18:43.466 "method": "bdev_set_options", 00:18:43.466 "params": { 00:18:43.466 "bdev_io_pool_size": 65535, 00:18:43.466 "bdev_io_cache_size": 256, 00:18:43.466 "bdev_auto_examine": true, 00:18:43.466 "iobuf_small_cache_size": 128, 00:18:43.466 "iobuf_large_cache_size": 16 00:18:43.466 } 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "method": "bdev_raid_set_options", 00:18:43.466 "params": { 00:18:43.466 "process_window_size_kb": 1024 00:18:43.466 } 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "method": "bdev_iscsi_set_options", 00:18:43.466 "params": { 00:18:43.466 "timeout_sec": 30 00:18:43.466 } 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "method": "bdev_nvme_set_options", 00:18:43.466 "params": { 00:18:43.466 "action_on_timeout": "none", 00:18:43.466 "timeout_us": 0, 00:18:43.466 "timeout_admin_us": 0, 00:18:43.466 "keep_alive_timeout_ms": 10000, 00:18:43.466 "arbitration_burst": 0, 00:18:43.466 "low_priority_weight": 0, 00:18:43.466 "medium_priority_weight": 0, 00:18:43.466 "high_priority_weight": 0, 00:18:43.466 "nvme_adminq_poll_period_us": 10000, 00:18:43.466 "nvme_ioq_poll_period_us": 0, 00:18:43.466 "io_queue_requests": 0, 00:18:43.466 "delay_cmd_submit": true, 00:18:43.466 "transport_retry_count": 4, 00:18:43.466 "bdev_retry_count": 3, 00:18:43.466 "transport_ack_timeout": 0, 00:18:43.466 "ctrlr_loss_timeout_sec": 0, 00:18:43.466 "reconnect_delay_sec": 0, 00:18:43.466 "fast_io_fail_timeout_sec": 0, 00:18:43.466 "disable_auto_failback": false, 00:18:43.466 "generate_uuids": false, 00:18:43.466 "transport_tos": 0, 00:18:43.466 "nvme_error_stat": false, 00:18:43.466 "rdma_srq_size": 0, 00:18:43.466 "io_path_stat": false, 00:18:43.466 "allow_accel_sequence": false, 00:18:43.466 "rdma_max_cq_size": 0, 00:18:43.466 "rdma_cm_event_timeout_ms": 0, 00:18:43.466 "dhchap_digests": [ 00:18:43.466 "sha256", 00:18:43.466 "sha384", 00:18:43.466 "sha512" 00:18:43.466 ], 00:18:43.466 "dhchap_dhgroups": [ 00:18:43.466 "null", 00:18:43.466 "ffdhe2048", 00:18:43.466 "ffdhe3072", 00:18:43.466 "ffdhe4096", 00:18:43.466 "ffdhe6144", 00:18:43.466 "ffdhe8192" 00:18:43.466 ] 00:18:43.466 } 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "method": 
"bdev_nvme_set_hotplug", 00:18:43.466 "params": { 00:18:43.466 "period_us": 100000, 00:18:43.466 "enable": false 00:18:43.466 } 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "method": "bdev_malloc_create", 00:18:43.466 "params": { 00:18:43.466 "name": "malloc0", 00:18:43.466 "num_blocks": 8192, 00:18:43.466 "block_size": 4096, 00:18:43.466 "physical_block_size": 4096, 00:18:43.466 "uuid": "a88e3485-07a1-4489-b7af-168e8c96f56e", 00:18:43.466 "optimal_io_boundary": 0 00:18:43.466 } 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "method": "bdev_wait_for_examine" 00:18:43.466 } 00:18:43.466 ] 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "subsystem": "nbd", 00:18:43.466 "config": [] 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "subsystem": "scheduler", 00:18:43.466 "config": [ 00:18:43.466 { 00:18:43.466 "method": "framework_set_scheduler", 00:18:43.466 "params": { 00:18:43.466 "name": "static" 00:18:43.466 } 00:18:43.466 } 00:18:43.466 ] 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "subsystem": "nvmf", 00:18:43.466 "config": [ 00:18:43.466 { 00:18:43.466 "method": "nvmf_set_config", 00:18:43.466 "params": { 00:18:43.466 "discovery_filter": "match_any", 00:18:43.466 "admin_cmd_passthru": { 00:18:43.466 "identify_ctrlr": false 00:18:43.466 } 00:18:43.466 } 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "method": "nvmf_set_max_subsystems", 00:18:43.466 "params": { 00:18:43.466 "max_subsystems": 1024 00:18:43.466 } 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "method": "nvmf_set_crdt", 00:18:43.466 "params": { 00:18:43.466 "crdt1": 0, 00:18:43.466 "crdt2": 0, 00:18:43.466 "crdt3": 0 00:18:43.466 } 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "method": "nvmf_create_transport", 00:18:43.466 "params": { 00:18:43.466 "trtype": "TCP", 00:18:43.466 "max_queue_depth": 128, 00:18:43.466 "max_io_qpairs_per_ctrlr": 127, 00:18:43.466 "in_capsule_data_size": 4096, 00:18:43.466 "max_io_size": 131072, 00:18:43.466 "io_unit_size": 131072, 00:18:43.466 "max_aq_depth": 128, 00:18:43.466 "num_shared_buffers": 511, 00:18:43.466 "buf_cache_size": 4294967295, 00:18:43.466 "dif_insert_or_strip": false, 00:18:43.466 "zcopy": false, 00:18:43.466 "c2h_success": false, 00:18:43.466 "sock_priority": 0, 00:18:43.466 "abort_timeout_sec": 1, 00:18:43.466 "ack_timeout": 0, 00:18:43.466 "data_wr_pool_size": 0 00:18:43.466 } 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "method": "nvmf_create_subsystem", 00:18:43.466 "params": { 00:18:43.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.466 "allow_any_host": false, 00:18:43.466 "serial_number": "00000000000000000000", 00:18:43.466 "model_number": "SPDK bdev Controller", 00:18:43.466 "max_namespaces": 32, 00:18:43.466 "min_cntlid": 1, 00:18:43.466 "max_cntlid": 65519, 00:18:43.466 "ana_reporting": false 00:18:43.466 } 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "method": "nvmf_subsystem_add_host", 00:18:43.466 "params": { 00:18:43.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.466 "host": "nqn.2016-06.io.spdk:host1", 00:18:43.466 "psk": "key0" 00:18:43.466 } 00:18:43.466 }, 00:18:43.466 { 00:18:43.466 "method": "nvmf_subsystem_add_ns", 00:18:43.466 "params": { 00:18:43.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.466 "namespace": { 00:18:43.466 "nsid": 1, 00:18:43.466 "bdev_name": "malloc0", 00:18:43.467 "nguid": "A88E348507A14489B7AF168E8C96F56E", 00:18:43.467 "uuid": "a88e3485-07a1-4489-b7af-168e8c96f56e", 00:18:43.467 "no_auto_visible": false 00:18:43.467 } 00:18:43.467 } 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "method": "nvmf_subsystem_add_listener", 00:18:43.467 "params": { 
00:18:43.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.467 "listen_address": { 00:18:43.467 "trtype": "TCP", 00:18:43.467 "adrfam": "IPv4", 00:18:43.467 "traddr": "10.0.0.2", 00:18:43.467 "trsvcid": "4420" 00:18:43.467 }, 00:18:43.467 "secure_channel": true 00:18:43.467 } 00:18:43.467 } 00:18:43.467 ] 00:18:43.467 } 00:18:43.467 ] 00:18:43.467 }' 00:18:43.467 12:14:44 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:43.467 12:14:44 -- target/tls.sh@264 -- # bperfcfg='{ 00:18:43.467 "subsystems": [ 00:18:43.467 { 00:18:43.467 "subsystem": "keyring", 00:18:43.467 "config": [ 00:18:43.467 { 00:18:43.467 "method": "keyring_file_add_key", 00:18:43.467 "params": { 00:18:43.467 "name": "key0", 00:18:43.467 "path": "/tmp/tmp.0wbser70b5" 00:18:43.467 } 00:18:43.467 } 00:18:43.467 ] 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "subsystem": "iobuf", 00:18:43.467 "config": [ 00:18:43.467 { 00:18:43.467 "method": "iobuf_set_options", 00:18:43.467 "params": { 00:18:43.467 "small_pool_count": 8192, 00:18:43.467 "large_pool_count": 1024, 00:18:43.467 "small_bufsize": 8192, 00:18:43.467 "large_bufsize": 135168 00:18:43.467 } 00:18:43.467 } 00:18:43.467 ] 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "subsystem": "sock", 00:18:43.467 "config": [ 00:18:43.467 { 00:18:43.467 "method": "sock_impl_set_options", 00:18:43.467 "params": { 00:18:43.467 "impl_name": "posix", 00:18:43.467 "recv_buf_size": 2097152, 00:18:43.467 "send_buf_size": 2097152, 00:18:43.467 "enable_recv_pipe": true, 00:18:43.467 "enable_quickack": false, 00:18:43.467 "enable_placement_id": 0, 00:18:43.467 "enable_zerocopy_send_server": true, 00:18:43.467 "enable_zerocopy_send_client": false, 00:18:43.467 "zerocopy_threshold": 0, 00:18:43.467 "tls_version": 0, 00:18:43.467 "enable_ktls": false 00:18:43.467 } 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "method": "sock_impl_set_options", 00:18:43.467 "params": { 00:18:43.467 "impl_name": "ssl", 00:18:43.467 "recv_buf_size": 4096, 00:18:43.467 "send_buf_size": 4096, 00:18:43.467 "enable_recv_pipe": true, 00:18:43.467 "enable_quickack": false, 00:18:43.467 "enable_placement_id": 0, 00:18:43.467 "enable_zerocopy_send_server": true, 00:18:43.467 "enable_zerocopy_send_client": false, 00:18:43.467 "zerocopy_threshold": 0, 00:18:43.467 "tls_version": 0, 00:18:43.467 "enable_ktls": false 00:18:43.467 } 00:18:43.467 } 00:18:43.467 ] 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "subsystem": "vmd", 00:18:43.467 "config": [] 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "subsystem": "accel", 00:18:43.467 "config": [ 00:18:43.467 { 00:18:43.467 "method": "accel_set_options", 00:18:43.467 "params": { 00:18:43.467 "small_cache_size": 128, 00:18:43.467 "large_cache_size": 16, 00:18:43.467 "task_count": 2048, 00:18:43.467 "sequence_count": 2048, 00:18:43.467 "buf_count": 2048 00:18:43.467 } 00:18:43.467 } 00:18:43.467 ] 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "subsystem": "bdev", 00:18:43.467 "config": [ 00:18:43.467 { 00:18:43.467 "method": "bdev_set_options", 00:18:43.467 "params": { 00:18:43.467 "bdev_io_pool_size": 65535, 00:18:43.467 "bdev_io_cache_size": 256, 00:18:43.467 "bdev_auto_examine": true, 00:18:43.467 "iobuf_small_cache_size": 128, 00:18:43.467 "iobuf_large_cache_size": 16 00:18:43.467 } 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "method": "bdev_raid_set_options", 00:18:43.467 "params": { 00:18:43.467 "process_window_size_kb": 1024 00:18:43.467 } 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "method": 
"bdev_iscsi_set_options", 00:18:43.467 "params": { 00:18:43.467 "timeout_sec": 30 00:18:43.467 } 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "method": "bdev_nvme_set_options", 00:18:43.467 "params": { 00:18:43.467 "action_on_timeout": "none", 00:18:43.467 "timeout_us": 0, 00:18:43.467 "timeout_admin_us": 0, 00:18:43.467 "keep_alive_timeout_ms": 10000, 00:18:43.467 "arbitration_burst": 0, 00:18:43.467 "low_priority_weight": 0, 00:18:43.467 "medium_priority_weight": 0, 00:18:43.467 "high_priority_weight": 0, 00:18:43.467 "nvme_adminq_poll_period_us": 10000, 00:18:43.467 "nvme_ioq_poll_period_us": 0, 00:18:43.467 "io_queue_requests": 512, 00:18:43.467 "delay_cmd_submit": true, 00:18:43.467 "transport_retry_count": 4, 00:18:43.467 "bdev_retry_count": 3, 00:18:43.467 "transport_ack_timeout": 0, 00:18:43.467 "ctrlr_loss_timeout_sec": 0, 00:18:43.467 "reconnect_delay_sec": 0, 00:18:43.467 "fast_io_fail_timeout_sec": 0, 00:18:43.467 "disable_auto_failback": false, 00:18:43.467 "generate_uuids": false, 00:18:43.467 "transport_tos": 0, 00:18:43.467 "nvme_error_stat": false, 00:18:43.467 "rdma_srq_size": 0, 00:18:43.467 "io_path_stat": false, 00:18:43.467 "allow_accel_sequence": false, 00:18:43.467 "rdma_max_cq_size": 0, 00:18:43.467 "rdma_cm_event_timeout_ms": 0, 00:18:43.467 "dhchap_digests": [ 00:18:43.467 "sha256", 00:18:43.467 "sha384", 00:18:43.467 "sha512" 00:18:43.467 ], 00:18:43.467 "dhchap_dhgroups": [ 00:18:43.467 "null", 00:18:43.467 "ffdhe2048", 00:18:43.467 "ffdhe3072", 00:18:43.467 "ffdhe4096", 00:18:43.467 "ffdhe6144", 00:18:43.467 "ffdhe8192" 00:18:43.467 ] 00:18:43.467 } 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "method": "bdev_nvme_attach_controller", 00:18:43.467 "params": { 00:18:43.467 "name": "nvme0", 00:18:43.467 "trtype": "TCP", 00:18:43.467 "adrfam": "IPv4", 00:18:43.467 "traddr": "10.0.0.2", 00:18:43.467 "trsvcid": "4420", 00:18:43.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.467 "prchk_reftag": false, 00:18:43.467 "prchk_guard": false, 00:18:43.467 "ctrlr_loss_timeout_sec": 0, 00:18:43.467 "reconnect_delay_sec": 0, 00:18:43.467 "fast_io_fail_timeout_sec": 0, 00:18:43.467 "psk": "key0", 00:18:43.467 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.467 "hdgst": false, 00:18:43.467 "ddgst": false 00:18:43.467 } 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "method": "bdev_nvme_set_hotplug", 00:18:43.467 "params": { 00:18:43.467 "period_us": 100000, 00:18:43.467 "enable": false 00:18:43.467 } 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "method": "bdev_enable_histogram", 00:18:43.467 "params": { 00:18:43.467 "name": "nvme0n1", 00:18:43.467 "enable": true 00:18:43.467 } 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "method": "bdev_wait_for_examine" 00:18:43.467 } 00:18:43.467 ] 00:18:43.467 }, 00:18:43.467 { 00:18:43.467 "subsystem": "nbd", 00:18:43.467 "config": [] 00:18:43.467 } 00:18:43.467 ] 00:18:43.467 }' 00:18:43.467 12:14:44 -- target/tls.sh@266 -- # killprocess 3428533 00:18:43.467 12:14:44 -- common/autotest_common.sh@936 -- # '[' -z 3428533 ']' 00:18:43.467 12:14:44 -- common/autotest_common.sh@940 -- # kill -0 3428533 00:18:43.467 12:14:44 -- common/autotest_common.sh@941 -- # uname 00:18:43.467 12:14:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:43.467 12:14:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3428533 00:18:43.729 12:14:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:43.729 12:14:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:43.729 12:14:44 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 3428533' 00:18:43.729 killing process with pid 3428533 00:18:43.729 12:14:44 -- common/autotest_common.sh@955 -- # kill 3428533 00:18:43.729 Received shutdown signal, test time was about 1.000000 seconds 00:18:43.729 00:18:43.729 Latency(us) 00:18:43.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.729 =================================================================================================================== 00:18:43.729 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.729 12:14:44 -- common/autotest_common.sh@960 -- # wait 3428533 00:18:43.729 12:14:44 -- target/tls.sh@267 -- # killprocess 3428321 00:18:43.729 12:14:44 -- common/autotest_common.sh@936 -- # '[' -z 3428321 ']' 00:18:43.729 12:14:44 -- common/autotest_common.sh@940 -- # kill -0 3428321 00:18:43.729 12:14:44 -- common/autotest_common.sh@941 -- # uname 00:18:43.729 12:14:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:43.729 12:14:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3428321 00:18:43.729 12:14:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:43.729 12:14:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:43.729 12:14:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3428321' 00:18:43.729 killing process with pid 3428321 00:18:43.729 12:14:44 -- common/autotest_common.sh@955 -- # kill 3428321 00:18:43.729 12:14:44 -- common/autotest_common.sh@960 -- # wait 3428321 00:18:43.989 12:14:45 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:43.989 12:14:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:43.989 12:14:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:43.989 12:14:45 -- target/tls.sh@269 -- # echo '{ 00:18:43.989 "subsystems": [ 00:18:43.989 { 00:18:43.989 "subsystem": "keyring", 00:18:43.989 "config": [ 00:18:43.989 { 00:18:43.989 "method": "keyring_file_add_key", 00:18:43.989 "params": { 00:18:43.989 "name": "key0", 00:18:43.989 "path": "/tmp/tmp.0wbser70b5" 00:18:43.989 } 00:18:43.989 } 00:18:43.989 ] 00:18:43.989 }, 00:18:43.989 { 00:18:43.989 "subsystem": "iobuf", 00:18:43.989 "config": [ 00:18:43.989 { 00:18:43.989 "method": "iobuf_set_options", 00:18:43.989 "params": { 00:18:43.989 "small_pool_count": 8192, 00:18:43.989 "large_pool_count": 1024, 00:18:43.989 "small_bufsize": 8192, 00:18:43.989 "large_bufsize": 135168 00:18:43.989 } 00:18:43.989 } 00:18:43.989 ] 00:18:43.989 }, 00:18:43.989 { 00:18:43.989 "subsystem": "sock", 00:18:43.989 "config": [ 00:18:43.989 { 00:18:43.989 "method": "sock_impl_set_options", 00:18:43.989 "params": { 00:18:43.989 "impl_name": "posix", 00:18:43.989 "recv_buf_size": 2097152, 00:18:43.989 "send_buf_size": 2097152, 00:18:43.989 "enable_recv_pipe": true, 00:18:43.989 "enable_quickack": false, 00:18:43.989 "enable_placement_id": 0, 00:18:43.989 "enable_zerocopy_send_server": true, 00:18:43.989 "enable_zerocopy_send_client": false, 00:18:43.989 "zerocopy_threshold": 0, 00:18:43.989 "tls_version": 0, 00:18:43.989 "enable_ktls": false 00:18:43.989 } 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "method": "sock_impl_set_options", 00:18:43.990 "params": { 00:18:43.990 "impl_name": "ssl", 00:18:43.990 "recv_buf_size": 4096, 00:18:43.990 "send_buf_size": 4096, 00:18:43.990 "enable_recv_pipe": true, 00:18:43.990 "enable_quickack": false, 00:18:43.990 "enable_placement_id": 0, 00:18:43.990 "enable_zerocopy_send_server": true, 00:18:43.990 
"enable_zerocopy_send_client": false, 00:18:43.990 "zerocopy_threshold": 0, 00:18:43.990 "tls_version": 0, 00:18:43.990 "enable_ktls": false 00:18:43.990 } 00:18:43.990 } 00:18:43.990 ] 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "subsystem": "vmd", 00:18:43.990 "config": [] 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "subsystem": "accel", 00:18:43.990 "config": [ 00:18:43.990 { 00:18:43.990 "method": "accel_set_options", 00:18:43.990 "params": { 00:18:43.990 "small_cache_size": 128, 00:18:43.990 "large_cache_size": 16, 00:18:43.990 "task_count": 2048, 00:18:43.990 "sequence_count": 2048, 00:18:43.990 "buf_count": 2048 00:18:43.990 } 00:18:43.990 } 00:18:43.990 ] 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "subsystem": "bdev", 00:18:43.990 "config": [ 00:18:43.990 { 00:18:43.990 "method": "bdev_set_options", 00:18:43.990 "params": { 00:18:43.990 "bdev_io_pool_size": 65535, 00:18:43.990 "bdev_io_cache_size": 256, 00:18:43.990 "bdev_auto_examine": true, 00:18:43.990 "iobuf_small_cache_size": 128, 00:18:43.990 "iobuf_large_cache_size": 16 00:18:43.990 } 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "method": "bdev_raid_set_options", 00:18:43.990 "params": { 00:18:43.990 "process_window_size_kb": 1024 00:18:43.990 } 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "method": "bdev_iscsi_set_options", 00:18:43.990 "params": { 00:18:43.990 "timeout_sec": 30 00:18:43.990 } 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "method": "bdev_nvme_set_options", 00:18:43.990 "params": { 00:18:43.990 "action_on_timeout": "none", 00:18:43.990 "timeout_us": 0, 00:18:43.990 "timeout_admin_us": 0, 00:18:43.990 "keep_alive_timeout_ms": 10000, 00:18:43.990 "arbitration_burst": 0, 00:18:43.990 "low_priority_weight": 0, 00:18:43.990 "medium_priority_weight": 0, 00:18:43.990 "high_priority_weight": 0, 00:18:43.990 "nvme_adminq_poll_period_us": 10000, 00:18:43.990 "nvme_ioq_poll_period_us": 0, 00:18:43.990 "io_queue_requests": 0, 00:18:43.990 "delay_cmd_submit": true, 00:18:43.990 "transport_retry_count": 4, 00:18:43.990 "bdev_retry_count": 3, 00:18:43.990 "transport_ack_timeout": 0, 00:18:43.990 "ctrlr_loss_timeout_sec": 0, 00:18:43.990 "reconnect_delay_sec": 0, 00:18:43.990 "fast_io_fail_timeout_sec": 0, 00:18:43.990 "disable_auto_failback": false, 00:18:43.990 "generate_uuids": false, 00:18:43.990 "transport_tos": 0, 00:18:43.990 "nvme_error_stat": false, 00:18:43.990 "rdma_srq_size": 0, 00:18:43.990 "io_path_stat": false, 00:18:43.990 "allow_accel_sequence": false, 00:18:43.990 "rdma_max_cq_size": 0, 00:18:43.990 "rdma_cm_event_timeout_ms": 0, 00:18:43.990 "dhchap_digests": [ 00:18:43.990 "sha256", 00:18:43.990 "sha384", 00:18:43.990 "sha512" 00:18:43.990 ], 00:18:43.990 "dhchap_dhgroups": [ 00:18:43.990 "null", 00:18:43.990 "ffdhe2048", 00:18:43.990 "ffdhe3072", 00:18:43.990 "ffdhe4096", 00:18:43.990 "ffdhe6144", 00:18:43.990 "ffdhe8192" 00:18:43.990 ] 00:18:43.990 } 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "method": "bdev_nvme_set_hotplug", 00:18:43.990 "params": { 00:18:43.990 "period_us": 100000, 00:18:43.990 "enable": false 00:18:43.990 } 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "method": "bdev_malloc_create", 00:18:43.990 "params": { 00:18:43.990 "name": "malloc0", 00:18:43.990 "num_blocks": 8192, 00:18:43.990 "block_size": 4096, 00:18:43.990 "physical_block_size": 4096, 00:18:43.990 "uuid": "a88e3485-07a1-4489-b7af-168e8c96f56e", 00:18:43.990 "optimal_io_boundary": 0 00:18:43.990 } 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "method": "bdev_wait_for_examine" 00:18:43.990 } 00:18:43.990 ] 00:18:43.990 }, 
00:18:43.990 { 00:18:43.990 "subsystem": "nbd", 00:18:43.990 "config": [] 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "subsystem": "scheduler", 00:18:43.990 "config": [ 00:18:43.990 { 00:18:43.990 "method": "framework_set_scheduler", 00:18:43.990 "params": { 00:18:43.990 "name": "static" 00:18:43.990 } 00:18:43.990 } 00:18:43.990 ] 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "subsystem": "nvmf", 00:18:43.990 "config": [ 00:18:43.990 { 00:18:43.990 "method": "nvmf_set_config", 00:18:43.990 "params": { 00:18:43.990 "discovery_filter": "match_any", 00:18:43.990 "admin_cmd_passthru": { 00:18:43.990 "identify_ctrlr": false 00:18:43.990 } 00:18:43.990 } 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "method": "nvmf_set_max_subsystems", 00:18:43.990 "params": { 00:18:43.990 "max_subsystems": 1024 00:18:43.990 } 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "method": "nvmf_set_crdt", 00:18:43.990 "params": { 00:18:43.990 "crdt1": 0, 00:18:43.990 "crdt2": 0, 00:18:43.990 "crdt3": 0 00:18:43.990 } 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "method": "nvmf_create_transport", 00:18:43.990 "params": { 00:18:43.990 "trtype": "TCP", 00:18:43.990 "max_queue_depth": 128, 00:18:43.990 "max_io_qpairs_per_ctrlr": 127, 00:18:43.990 "in_capsule_data_size": 4096, 00:18:43.990 "max_io_size": 131072, 00:18:43.990 "io_unit_size": 131072, 00:18:43.990 "max_aq_depth": 128, 00:18:43.990 "num_shared_buffers": 511, 00:18:43.990 "buf_cache_size": 4294967295, 00:18:43.990 "dif_insert_or_strip": false, 00:18:43.990 "zcopy": false, 00:18:43.990 "c2h_success": false, 00:18:43.990 "sock_priority": 0, 00:18:43.990 "abort_timeout_sec": 1, 00:18:43.990 "ack_timeout": 0, 00:18:43.990 "data_wr_pool_size": 0 00:18:43.990 } 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "method": "nvmf_create_subsystem", 00:18:43.990 "params": { 00:18:43.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.990 "allow_any_host": false, 00:18:43.990 "serial_number": "00000000000000000000", 00:18:43.990 "model_number": "SPDK bdev Controller", 00:18:43.990 "max_namespaces": 32, 00:18:43.990 "min_cntlid": 1, 00:18:43.990 "max_cntlid": 65519, 00:18:43.990 "ana_reporting": false 00:18:43.990 } 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "method": "nvmf_subsystem_add_host", 00:18:43.990 "params": { 00:18:43.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.990 "host": "nqn.2016-06.io.spdk:host1", 00:18:43.990 "psk": "key0" 00:18:43.990 } 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "method": "nvmf_subsystem_add_ns", 00:18:43.990 "params": { 00:18:43.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.990 "namespace": { 00:18:43.990 "nsid": 1, 00:18:43.990 "bdev_name": "malloc0", 00:18:43.990 "nguid": "A88E348507A14489B7AF168E8C96F56E", 00:18:43.990 "uuid": "a88e3485-07a1-4489-b7af-168e8c96f56e", 00:18:43.990 "no_auto_visible": false 00:18:43.990 } 00:18:43.990 } 00:18:43.990 }, 00:18:43.990 { 00:18:43.990 "method": "nvmf_subsystem_add_listener", 00:18:43.990 "params": { 00:18:43.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.990 "listen_address": { 00:18:43.990 "trtype": "TCP", 00:18:43.990 "adrfam": "IPv4", 00:18:43.990 "traddr": "10.0.0.2", 00:18:43.990 "trsvcid": "4420" 00:18:43.990 }, 00:18:43.990 "secure_channel": true 00:18:43.990 } 00:18:43.990 } 00:18:43.990 ] 00:18:43.990 } 00:18:43.990 ] 00:18:43.990 }' 00:18:43.990 12:14:45 -- common/autotest_common.sh@10 -- # set +x 00:18:43.990 12:14:45 -- nvmf/common.sh@470 -- # nvmfpid=3429214 00:18:43.990 12:14:45 -- nvmf/common.sh@471 -- # waitforlisten 3429214 00:18:43.990 12:14:45 -- nvmf/common.sh@469 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:43.990 12:14:45 -- common/autotest_common.sh@817 -- # '[' -z 3429214 ']' 00:18:43.990 12:14:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.990 12:14:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:43.990 12:14:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.990 12:14:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:43.990 12:14:45 -- common/autotest_common.sh@10 -- # set +x 00:18:43.990 [2024-04-26 12:14:45.069529] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:43.990 [2024-04-26 12:14:45.069584] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.990 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.990 [2024-04-26 12:14:45.134314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.990 [2024-04-26 12:14:45.197512] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.990 [2024-04-26 12:14:45.197548] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.990 [2024-04-26 12:14:45.197555] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.990 [2024-04-26 12:14:45.197562] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.990 [2024-04-26 12:14:45.197568] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.991 [2024-04-26 12:14:45.197620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.259 [2024-04-26 12:14:45.386612] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.259 [2024-04-26 12:14:45.418617] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:44.259 [2024-04-26 12:14:45.431157] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.911 12:14:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:44.911 12:14:45 -- common/autotest_common.sh@850 -- # return 0 00:18:44.911 12:14:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:44.912 12:14:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:44.912 12:14:45 -- common/autotest_common.sh@10 -- # set +x 00:18:44.912 12:14:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.912 12:14:45 -- target/tls.sh@272 -- # bdevperf_pid=3429249 00:18:44.912 12:14:45 -- target/tls.sh@273 -- # waitforlisten 3429249 /var/tmp/bdevperf.sock 00:18:44.912 12:14:45 -- common/autotest_common.sh@817 -- # '[' -z 3429249 ']' 00:18:44.912 12:14:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.912 12:14:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:44.912 12:14:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:44.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.912 12:14:45 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:44.912 12:14:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:44.912 12:14:45 -- common/autotest_common.sh@10 -- # set +x 00:18:44.912 12:14:45 -- target/tls.sh@270 -- # echo '{ 00:18:44.912 "subsystems": [ 00:18:44.912 { 00:18:44.912 "subsystem": "keyring", 00:18:44.912 "config": [ 00:18:44.912 { 00:18:44.912 "method": "keyring_file_add_key", 00:18:44.912 "params": { 00:18:44.912 "name": "key0", 00:18:44.912 "path": "/tmp/tmp.0wbser70b5" 00:18:44.912 } 00:18:44.912 } 00:18:44.912 ] 00:18:44.912 }, 00:18:44.912 { 00:18:44.912 "subsystem": "iobuf", 00:18:44.912 "config": [ 00:18:44.912 { 00:18:44.912 "method": "iobuf_set_options", 00:18:44.912 "params": { 00:18:44.912 "small_pool_count": 8192, 00:18:44.912 "large_pool_count": 1024, 00:18:44.912 "small_bufsize": 8192, 00:18:44.912 "large_bufsize": 135168 00:18:44.912 } 00:18:44.912 } 00:18:44.912 ] 00:18:44.912 }, 00:18:44.912 { 00:18:44.912 "subsystem": "sock", 00:18:44.912 "config": [ 00:18:44.912 { 00:18:44.912 "method": "sock_impl_set_options", 00:18:44.912 "params": { 00:18:44.912 "impl_name": "posix", 00:18:44.912 "recv_buf_size": 2097152, 00:18:44.912 "send_buf_size": 2097152, 00:18:44.912 "enable_recv_pipe": true, 00:18:44.912 "enable_quickack": false, 00:18:44.912 "enable_placement_id": 0, 00:18:44.912 "enable_zerocopy_send_server": true, 00:18:44.912 "enable_zerocopy_send_client": false, 00:18:44.912 "zerocopy_threshold": 0, 00:18:44.912 "tls_version": 0, 00:18:44.912 "enable_ktls": false 00:18:44.912 } 00:18:44.912 }, 00:18:44.912 { 00:18:44.912 "method": "sock_impl_set_options", 00:18:44.912 "params": { 00:18:44.912 "impl_name": "ssl", 00:18:44.912 "recv_buf_size": 4096, 00:18:44.912 "send_buf_size": 4096, 00:18:44.912 "enable_recv_pipe": true, 00:18:44.912 "enable_quickack": false, 00:18:44.912 "enable_placement_id": 0, 00:18:44.912 "enable_zerocopy_send_server": true, 00:18:44.912 "enable_zerocopy_send_client": false, 00:18:44.912 "zerocopy_threshold": 0, 00:18:44.912 "tls_version": 0, 00:18:44.912 "enable_ktls": false 00:18:44.912 } 00:18:44.912 } 00:18:44.912 ] 00:18:44.912 }, 00:18:44.912 { 00:18:44.912 "subsystem": "vmd", 00:18:44.912 "config": [] 00:18:44.912 }, 00:18:44.912 { 00:18:44.912 "subsystem": "accel", 00:18:44.912 "config": [ 00:18:44.912 { 00:18:44.912 "method": "accel_set_options", 00:18:44.912 "params": { 00:18:44.912 "small_cache_size": 128, 00:18:44.912 "large_cache_size": 16, 00:18:44.912 "task_count": 2048, 00:18:44.912 "sequence_count": 2048, 00:18:44.912 "buf_count": 2048 00:18:44.912 } 00:18:44.912 } 00:18:44.912 ] 00:18:44.912 }, 00:18:44.912 { 00:18:44.912 "subsystem": "bdev", 00:18:44.912 "config": [ 00:18:44.912 { 00:18:44.912 "method": "bdev_set_options", 00:18:44.912 "params": { 00:18:44.912 "bdev_io_pool_size": 65535, 00:18:44.912 "bdev_io_cache_size": 256, 00:18:44.912 "bdev_auto_examine": true, 00:18:44.912 "iobuf_small_cache_size": 128, 00:18:44.912 "iobuf_large_cache_size": 16 00:18:44.912 } 00:18:44.912 }, 00:18:44.912 { 00:18:44.912 "method": "bdev_raid_set_options", 00:18:44.912 "params": { 00:18:44.912 "process_window_size_kb": 1024 00:18:44.912 } 00:18:44.912 }, 00:18:44.912 { 00:18:44.912 "method": "bdev_iscsi_set_options", 00:18:44.912 "params": { 00:18:44.912 
"timeout_sec": 30 00:18:44.912 } 00:18:44.912 }, 00:18:44.912 { 00:18:44.912 "method": "bdev_nvme_set_options", 00:18:44.912 "params": { 00:18:44.912 "action_on_timeout": "none", 00:18:44.912 "timeout_us": 0, 00:18:44.912 "timeout_admin_us": 0, 00:18:44.912 "keep_alive_timeout_ms": 10000, 00:18:44.912 "arbitration_burst": 0, 00:18:44.912 "low_priority_weight": 0, 00:18:44.912 "medium_priority_weight": 0, 00:18:44.912 "high_priority_weight": 0, 00:18:44.912 "nvme_adminq_poll_period_us": 10000, 00:18:44.912 "nvme_ioq_poll_period_us": 0, 00:18:44.912 "io_queue_requests": 512, 00:18:44.912 "delay_cmd_submit": true, 00:18:44.912 "transport_retry_count": 4, 00:18:44.912 "bdev_retry_count": 3, 00:18:44.912 "transport_ack_timeout": 0, 00:18:44.912 "ctrlr_loss_timeout_sec": 0, 00:18:44.912 "reconnect_delay_sec": 0, 00:18:44.912 "fast_io_fail_timeout_sec": 0, 00:18:44.912 "disable_auto_failback": false, 00:18:44.912 "generate_uuids": false, 00:18:44.912 "transport_tos": 0, 00:18:44.912 "nvme_error_stat": false, 00:18:44.912 "rdma_srq_size": 0, 00:18:44.912 "io_path_stat": false, 00:18:44.912 "allow_accel_sequence": false, 00:18:44.912 "rdma_max_cq_size": 0, 00:18:44.912 "rdma_cm_event_timeout_ms": 0, 00:18:44.912 "dhchap_digests": [ 00:18:44.912 "sha256", 00:18:44.912 "sha384", 00:18:44.912 "sha512" 00:18:44.912 ], 00:18:44.912 "dhchap_dhgroups": [ 00:18:44.912 "null", 00:18:44.912 "ffdhe2048", 00:18:44.912 "ffdhe3072", 00:18:44.912 "ffdhe4096", 00:18:44.912 "ffdhe6144", 00:18:44.912 "ffdhe8192" 00:18:44.912 ] 00:18:44.912 } 00:18:44.912 }, 00:18:44.912 { 00:18:44.912 "method": "bdev_nvme_attach_controller", 00:18:44.912 "params": { 00:18:44.912 "name": "nvme0", 00:18:44.912 "trtype": "TCP", 00:18:44.912 "adrfam": "IPv4", 00:18:44.912 "traddr": "10.0.0.2", 00:18:44.912 "trsvcid": "4420", 00:18:44.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.912 "prchk_reftag": false, 00:18:44.912 "prchk_guard": false, 00:18:44.912 "ctrlr_loss_timeout_sec": 0, 00:18:44.912 "reconnect_delay_sec": 0, 00:18:44.912 "fast_io_fail_timeout_sec": 0, 00:18:44.912 "psk": "key0", 00:18:44.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.912 "hdgst": false, 00:18:44.912 "ddgst": false 00:18:44.912 } 00:18:44.912 }, 00:18:44.912 { 00:18:44.912 "method": "bdev_nvme_set_hotplug", 00:18:44.912 "params": { 00:18:44.912 "period_us": 100000, 00:18:44.912 "enable": false 00:18:44.912 } 00:18:44.912 }, 00:18:44.912 { 00:18:44.912 "method": "bdev_enable_histogram", 00:18:44.912 "params": { 00:18:44.912 "name": "nvme0n1", 00:18:44.912 "enable": true 00:18:44.912 } 00:18:44.912 }, 00:18:44.912 { 00:18:44.912 "method": "bdev_wait_for_examine" 00:18:44.912 } 00:18:44.912 ] 00:18:44.912 }, 00:18:44.912 { 00:18:44.912 "subsystem": "nbd", 00:18:44.912 "config": [] 00:18:44.912 } 00:18:44.912 ] 00:18:44.912 }' 00:18:44.912 [2024-04-26 12:14:45.921316] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:18:44.912 [2024-04-26 12:14:45.921367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429249 ] 00:18:44.912 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.912 [2024-04-26 12:14:45.994592] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.912 [2024-04-26 12:14:46.046751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.172 [2024-04-26 12:14:46.172524] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.742 12:14:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:45.742 12:14:46 -- common/autotest_common.sh@850 -- # return 0 00:18:45.742 12:14:46 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:45.742 12:14:46 -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:45.742 12:14:46 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.742 12:14:46 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:45.742 Running I/O for 1 seconds... 00:18:47.122 00:18:47.122 Latency(us) 00:18:47.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.122 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:47.122 Verification LBA range: start 0x0 length 0x2000 00:18:47.122 nvme0n1 : 1.02 4942.72 19.31 0.00 0.00 25710.92 4532.91 29491.20 00:18:47.122 =================================================================================================================== 00:18:47.122 Total : 4942.72 19.31 0.00 0.00 25710.92 4532.91 29491.20 00:18:47.122 0 00:18:47.122 12:14:47 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:47.122 12:14:47 -- target/tls.sh@279 -- # cleanup 00:18:47.122 12:14:47 -- target/tls.sh@15 -- # process_shm --id 0 00:18:47.122 12:14:47 -- common/autotest_common.sh@794 -- # type=--id 00:18:47.122 12:14:47 -- common/autotest_common.sh@795 -- # id=0 00:18:47.122 12:14:47 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:47.122 12:14:47 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:47.122 12:14:47 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:47.122 12:14:47 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:47.122 12:14:47 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:47.122 12:14:47 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:47.122 nvmf_trace.0 00:18:47.122 12:14:48 -- common/autotest_common.sh@809 -- # return 0 00:18:47.122 12:14:48 -- target/tls.sh@16 -- # killprocess 3429249 00:18:47.122 12:14:48 -- common/autotest_common.sh@936 -- # '[' -z 3429249 ']' 00:18:47.122 12:14:48 -- common/autotest_common.sh@940 -- # kill -0 3429249 00:18:47.122 12:14:48 -- common/autotest_common.sh@941 -- # uname 00:18:47.122 12:14:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:47.122 12:14:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3429249 00:18:47.122 12:14:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:47.122 12:14:48 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:18:47.122 12:14:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3429249' 00:18:47.122 killing process with pid 3429249 00:18:47.122 12:14:48 -- common/autotest_common.sh@955 -- # kill 3429249 00:18:47.122 Received shutdown signal, test time was about 1.000000 seconds 00:18:47.122 00:18:47.122 Latency(us) 00:18:47.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.122 =================================================================================================================== 00:18:47.122 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:47.123 12:14:48 -- common/autotest_common.sh@960 -- # wait 3429249 00:18:47.123 12:14:48 -- target/tls.sh@17 -- # nvmftestfini 00:18:47.123 12:14:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:47.123 12:14:48 -- nvmf/common.sh@117 -- # sync 00:18:47.123 12:14:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:47.123 12:14:48 -- nvmf/common.sh@120 -- # set +e 00:18:47.123 12:14:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:47.123 12:14:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:47.123 rmmod nvme_tcp 00:18:47.123 rmmod nvme_fabrics 00:18:47.123 rmmod nvme_keyring 00:18:47.123 12:14:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:47.123 12:14:48 -- nvmf/common.sh@124 -- # set -e 00:18:47.123 12:14:48 -- nvmf/common.sh@125 -- # return 0 00:18:47.123 12:14:48 -- nvmf/common.sh@478 -- # '[' -n 3429214 ']' 00:18:47.123 12:14:48 -- nvmf/common.sh@479 -- # killprocess 3429214 00:18:47.123 12:14:48 -- common/autotest_common.sh@936 -- # '[' -z 3429214 ']' 00:18:47.123 12:14:48 -- common/autotest_common.sh@940 -- # kill -0 3429214 00:18:47.123 12:14:48 -- common/autotest_common.sh@941 -- # uname 00:18:47.123 12:14:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:47.123 12:14:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3429214 00:18:47.123 12:14:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:47.123 12:14:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:47.123 12:14:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3429214' 00:18:47.123 killing process with pid 3429214 00:18:47.123 12:14:48 -- common/autotest_common.sh@955 -- # kill 3429214 00:18:47.123 12:14:48 -- common/autotest_common.sh@960 -- # wait 3429214 00:18:47.384 12:14:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:47.384 12:14:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:47.384 12:14:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:47.384 12:14:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:47.384 12:14:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:47.384 12:14:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.384 12:14:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.384 12:14:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.935 12:14:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:49.935 12:14:50 -- target/tls.sh@18 -- # rm -f /tmp/tmp.sNP5q1q1VL /tmp/tmp.TdF0RkdNEb /tmp/tmp.0wbser70b5 00:18:49.935 00:18:49.935 real 1m22.233s 00:18:49.935 user 2m7.417s 00:18:49.935 sys 0m25.262s 00:18:49.935 12:14:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:49.935 12:14:50 -- common/autotest_common.sh@10 -- # set +x 00:18:49.935 ************************************ 00:18:49.935 END TEST nvmf_tls 00:18:49.935 
************************************ 00:18:49.935 12:14:50 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:49.935 12:14:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:49.935 12:14:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:49.935 12:14:50 -- common/autotest_common.sh@10 -- # set +x 00:18:49.935 ************************************ 00:18:49.935 START TEST nvmf_fips 00:18:49.935 ************************************ 00:18:49.935 12:14:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:49.935 * Looking for test storage... 00:18:49.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:49.935 12:14:50 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.935 12:14:50 -- nvmf/common.sh@7 -- # uname -s 00:18:49.935 12:14:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.935 12:14:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.935 12:14:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.935 12:14:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.935 12:14:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.935 12:14:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.935 12:14:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.935 12:14:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.935 12:14:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.935 12:14:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.935 12:14:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.935 12:14:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.935 12:14:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.935 12:14:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.935 12:14:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.935 12:14:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.935 12:14:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.935 12:14:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.935 12:14:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.935 12:14:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.935 12:14:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.935 12:14:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.935 12:14:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.935 12:14:50 -- paths/export.sh@5 -- # export PATH 00:18:49.935 12:14:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.935 12:14:50 -- nvmf/common.sh@47 -- # : 0 00:18:49.935 12:14:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.936 12:14:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.936 12:14:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.936 12:14:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.936 12:14:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.936 12:14:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.936 12:14:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.936 12:14:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.936 12:14:50 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:49.936 12:14:50 -- fips/fips.sh@89 -- # check_openssl_version 00:18:49.936 12:14:50 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:49.936 12:14:50 -- fips/fips.sh@85 -- # openssl version 00:18:49.936 12:14:50 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:49.936 12:14:50 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:49.936 12:14:50 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:49.936 12:14:50 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:49.936 12:14:50 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:49.936 12:14:50 -- scripts/common.sh@333 -- # IFS=.-: 00:18:49.936 12:14:50 -- scripts/common.sh@333 -- # read -ra ver1 00:18:49.936 12:14:50 -- scripts/common.sh@334 -- # IFS=.-: 00:18:49.936 12:14:50 -- scripts/common.sh@334 -- # read -ra ver2 00:18:49.936 12:14:50 -- scripts/common.sh@335 -- # local 'op=>=' 00:18:49.936 12:14:50 -- scripts/common.sh@337 -- # ver1_l=3 00:18:49.936 12:14:50 -- scripts/common.sh@338 -- # ver2_l=3 00:18:49.936 12:14:50 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:18:49.936 12:14:50 -- scripts/common.sh@341 -- # case "$op" in 00:18:49.936 12:14:50 -- scripts/common.sh@345 -- # : 1 00:18:49.936 12:14:50 -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:49.936 12:14:50 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:49.936 12:14:50 -- scripts/common.sh@362 -- # decimal 3 00:18:49.936 12:14:50 -- scripts/common.sh@350 -- # local d=3 00:18:49.936 12:14:50 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:49.936 12:14:50 -- scripts/common.sh@352 -- # echo 3 00:18:49.936 12:14:50 -- scripts/common.sh@362 -- # ver1[v]=3 00:18:49.936 12:14:50 -- scripts/common.sh@363 -- # decimal 3 00:18:49.936 12:14:50 -- scripts/common.sh@350 -- # local d=3 00:18:49.936 12:14:50 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:49.936 12:14:50 -- scripts/common.sh@352 -- # echo 3 00:18:49.936 12:14:50 -- scripts/common.sh@363 -- # ver2[v]=3 00:18:49.936 12:14:50 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:49.936 12:14:50 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:49.936 12:14:50 -- scripts/common.sh@361 -- # (( v++ )) 00:18:49.936 12:14:50 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:49.936 12:14:50 -- scripts/common.sh@362 -- # decimal 0 00:18:49.936 12:14:50 -- scripts/common.sh@350 -- # local d=0 00:18:49.936 12:14:50 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:49.936 12:14:50 -- scripts/common.sh@352 -- # echo 0 00:18:49.936 12:14:50 -- scripts/common.sh@362 -- # ver1[v]=0 00:18:49.936 12:14:50 -- scripts/common.sh@363 -- # decimal 0 00:18:49.936 12:14:50 -- scripts/common.sh@350 -- # local d=0 00:18:49.936 12:14:50 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:49.936 12:14:50 -- scripts/common.sh@352 -- # echo 0 00:18:49.936 12:14:50 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:49.936 12:14:50 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:49.936 12:14:50 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:49.936 12:14:50 -- scripts/common.sh@361 -- # (( v++ )) 00:18:49.936 12:14:50 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:49.936 12:14:50 -- scripts/common.sh@362 -- # decimal 9 00:18:49.936 12:14:50 -- scripts/common.sh@350 -- # local d=9 00:18:49.936 12:14:50 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:49.936 12:14:50 -- scripts/common.sh@352 -- # echo 9 00:18:49.936 12:14:50 -- scripts/common.sh@362 -- # ver1[v]=9 00:18:49.936 12:14:50 -- scripts/common.sh@363 -- # decimal 0 00:18:49.936 12:14:50 -- scripts/common.sh@350 -- # local d=0 00:18:49.936 12:14:50 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:49.936 12:14:50 -- scripts/common.sh@352 -- # echo 0 00:18:49.936 12:14:50 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:49.936 12:14:50 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:49.936 12:14:50 -- scripts/common.sh@364 -- # return 0 00:18:49.936 12:14:50 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:49.936 12:14:50 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:49.936 12:14:50 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:49.936 12:14:50 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:49.936 12:14:50 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:49.936 12:14:50 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:49.936 12:14:50 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:49.936 12:14:50 -- fips/fips.sh@113 -- # build_openssl_config 00:18:49.936 12:14:50 -- fips/fips.sh@37 -- # cat 00:18:49.936 12:14:50 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:49.936 12:14:50 -- fips/fips.sh@58 -- # cat - 00:18:49.936 12:14:50 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:49.936 12:14:50 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:49.936 12:14:50 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:49.936 12:14:50 -- fips/fips.sh@116 -- # openssl list -providers 00:18:49.936 12:14:50 -- fips/fips.sh@116 -- # grep name 00:18:49.936 12:14:51 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:49.936 12:14:51 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:49.936 12:14:51 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:49.936 12:14:51 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:49.936 12:14:51 -- common/autotest_common.sh@638 -- # local es=0 00:18:49.936 12:14:51 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:49.936 12:14:51 -- fips/fips.sh@127 -- # : 00:18:49.936 12:14:51 -- common/autotest_common.sh@626 -- # local arg=openssl 00:18:49.936 12:14:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:49.936 12:14:51 -- common/autotest_common.sh@630 -- # type -t openssl 00:18:49.936 12:14:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:49.936 12:14:51 -- common/autotest_common.sh@632 -- # type -P openssl 00:18:49.936 12:14:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:49.936 12:14:51 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:18:49.936 12:14:51 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:18:49.936 12:14:51 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:18:49.936 Error setting digest 00:18:49.936 00D2AD16E27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:49.936 00D2AD16E27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:49.936 12:14:51 -- common/autotest_common.sh@641 -- # es=1 00:18:49.936 12:14:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:49.936 12:14:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:49.936 12:14:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:49.936 12:14:51 -- fips/fips.sh@130 -- # nvmftestinit 00:18:49.936 12:14:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:49.936 12:14:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.936 12:14:51 -- nvmf/common.sh@437 -- # prepare_net_devs 
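The "Error setting digest" lines above are the expected outcome: fips.sh confirms that the host's OpenSSL FIPS provider is actually enforcing approved algorithms by attempting an MD5 digest and treating the failure as success. A minimal stand-alone check along the same lines, assuming the FIPS provider is the active default (here via the exported OPENSSL_CONF=spdk_fips.conf), might look like this; the echoed strings are illustrative.

    # Sketch: MD5 is not a FIPS-approved digest, so fetching it must fail under enforcement.
    if echo -n probe | openssl md5 >/dev/null 2>&1; then
        echo "MD5 accepted - FIPS enforcement does not appear to be in effect"
        exit 1
    else
        echo "MD5 rejected - FIPS provider is enforcing approved algorithms"
    fi
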
00:18:49.936 12:14:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:49.936 12:14:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:49.936 12:14:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.936 12:14:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.936 12:14:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.936 12:14:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:49.936 12:14:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:49.936 12:14:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:49.936 12:14:51 -- common/autotest_common.sh@10 -- # set +x 00:18:58.073 12:14:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:58.073 12:14:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:58.073 12:14:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:58.073 12:14:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:58.073 12:14:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:58.073 12:14:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:58.073 12:14:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:58.073 12:14:58 -- nvmf/common.sh@295 -- # net_devs=() 00:18:58.073 12:14:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:58.073 12:14:58 -- nvmf/common.sh@296 -- # e810=() 00:18:58.073 12:14:58 -- nvmf/common.sh@296 -- # local -ga e810 00:18:58.073 12:14:58 -- nvmf/common.sh@297 -- # x722=() 00:18:58.073 12:14:58 -- nvmf/common.sh@297 -- # local -ga x722 00:18:58.073 12:14:58 -- nvmf/common.sh@298 -- # mlx=() 00:18:58.073 12:14:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:58.073 12:14:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.073 12:14:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.073 12:14:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.073 12:14:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.073 12:14:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.073 12:14:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.073 12:14:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.073 12:14:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.073 12:14:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.073 12:14:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.073 12:14:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.073 12:14:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:58.073 12:14:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:58.073 12:14:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:58.073 12:14:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.073 12:14:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:58.073 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:58.073 12:14:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.073 12:14:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:58.073 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:58.073 12:14:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:58.073 12:14:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.073 12:14:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.073 12:14:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:58.073 12:14:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.073 12:14:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:58.073 Found net devices under 0000:31:00.0: cvl_0_0 00:18:58.073 12:14:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.073 12:14:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.073 12:14:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.073 12:14:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:58.073 12:14:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.073 12:14:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:58.073 Found net devices under 0000:31:00.1: cvl_0_1 00:18:58.073 12:14:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.073 12:14:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:58.073 12:14:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:58.073 12:14:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:58.073 12:14:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.073 12:14:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.073 12:14:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.073 12:14:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:58.073 12:14:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.073 12:14:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:58.073 12:14:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:58.073 12:14:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:58.073 12:14:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.073 12:14:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:58.073 12:14:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:58.073 12:14:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.073 12:14:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.073 12:14:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.073 12:14:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:18:58.073 12:14:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:58.073 12:14:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.073 12:14:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.073 12:14:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.073 12:14:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:58.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:18:58.073 00:18:58.073 --- 10.0.0.2 ping statistics --- 00:18:58.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.073 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:18:58.073 12:14:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:58.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:18:58.073 00:18:58.073 --- 10.0.0.1 ping statistics --- 00:18:58.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.073 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:18:58.073 12:14:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.073 12:14:58 -- nvmf/common.sh@411 -- # return 0 00:18:58.073 12:14:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:58.073 12:14:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.073 12:14:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:58.073 12:14:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.073 12:14:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:58.073 12:14:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:58.073 12:14:58 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:58.073 12:14:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:58.073 12:14:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:58.073 12:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:58.073 12:14:58 -- nvmf/common.sh@470 -- # nvmfpid=3434206 00:18:58.073 12:14:58 -- nvmf/common.sh@471 -- # waitforlisten 3434206 00:18:58.073 12:14:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:58.073 12:14:58 -- common/autotest_common.sh@817 -- # '[' -z 3434206 ']' 00:18:58.073 12:14:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.073 12:14:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:58.073 12:14:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.073 12:14:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:58.073 12:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:58.073 [2024-04-26 12:14:58.489355] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:18:58.073 [2024-04-26 12:14:58.489427] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.073 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.073 [2024-04-26 12:14:58.564180] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.074 [2024-04-26 12:14:58.655056] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.074 [2024-04-26 12:14:58.655112] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.074 [2024-04-26 12:14:58.655120] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.074 [2024-04-26 12:14:58.655127] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.074 [2024-04-26 12:14:58.655133] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.074 [2024-04-26 12:14:58.655164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.074 12:14:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:58.074 12:14:59 -- common/autotest_common.sh@850 -- # return 0 00:18:58.074 12:14:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:58.074 12:14:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:58.074 12:14:59 -- common/autotest_common.sh@10 -- # set +x 00:18:58.074 12:14:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.074 12:14:59 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:58.074 12:14:59 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:58.074 12:14:59 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:58.074 12:14:59 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:58.074 12:14:59 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:58.074 12:14:59 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:58.074 12:14:59 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:58.074 12:14:59 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.334 [2024-04-26 12:14:59.415546] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.334 [2024-04-26 12:14:59.431537] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:58.334 [2024-04-26 12:14:59.431761] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.334 [2024-04-26 12:14:59.461695] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:58.334 malloc0 00:18:58.334 12:14:59 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:58.334 12:14:59 -- fips/fips.sh@147 -- # bdevperf_pid=3434371 00:18:58.334 12:14:59 -- fips/fips.sh@148 -- # waitforlisten 3434371 /var/tmp/bdevperf.sock 00:18:58.334 12:14:59 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 
-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:58.334 12:14:59 -- common/autotest_common.sh@817 -- # '[' -z 3434371 ']' 00:18:58.334 12:14:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.334 12:14:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:58.334 12:14:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.334 12:14:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:58.334 12:14:59 -- common/autotest_common.sh@10 -- # set +x 00:18:58.603 [2024-04-26 12:14:59.562467] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:58.603 [2024-04-26 12:14:59.562539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434371 ] 00:18:58.603 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.603 [2024-04-26 12:14:59.620799] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.603 [2024-04-26 12:14:59.682102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.174 12:15:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:59.174 12:15:00 -- common/autotest_common.sh@850 -- # return 0 00:18:59.174 12:15:00 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:59.435 [2024-04-26 12:15:00.457931] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.435 [2024-04-26 12:15:00.458003] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:59.435 TLSTESTn1 00:18:59.435 12:15:00 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:59.435 Running I/O for 10 seconds... 
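The fips.sh trace above reduces to the following client-side sequence: bdevperf is started paused on its own RPC socket, a TLS-PSK NVMe/TCP controller is attached through rpc.py, and bdevperf.py drives the 10-second verify workload. This is a condensed sketch assembled from the commands visible in this run (paths, NQNs, and the key file are the ones shown above), not the verbatim script:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock

    # Start bdevperf paused (-z) on a private RPC socket: QD 128, 4 KiB I/O, verify workload, 10 s.
    # (The harness waits for this socket to come up before issuing the attach.)
    $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 &

    # Attach the NVMe/TCP controller using the pre-shared TLS key written earlier (chmod 0600 key.txt).
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk $SPDK/test/nvmf/fips/key.txt

    # Kick off the queued I/O against the attached TLSTESTn1 bdev.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests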
00:19:11.672 00:19:11.672 Latency(us) 00:19:11.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.672 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:11.672 Verification LBA range: start 0x0 length 0x2000 00:19:11.672 TLSTESTn1 : 10.02 4157.50 16.24 0.00 0.00 30750.60 4587.52 86507.52 00:19:11.672 =================================================================================================================== 00:19:11.672 Total : 4157.50 16.24 0.00 0.00 30750.60 4587.52 86507.52 00:19:11.672 0 00:19:11.672 12:15:10 -- fips/fips.sh@1 -- # cleanup 00:19:11.672 12:15:10 -- fips/fips.sh@15 -- # process_shm --id 0 00:19:11.672 12:15:10 -- common/autotest_common.sh@794 -- # type=--id 00:19:11.672 12:15:10 -- common/autotest_common.sh@795 -- # id=0 00:19:11.672 12:15:10 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:11.672 12:15:10 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:11.672 12:15:10 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:11.672 12:15:10 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:11.672 12:15:10 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:11.672 12:15:10 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:11.672 nvmf_trace.0 00:19:11.672 12:15:10 -- common/autotest_common.sh@809 -- # return 0 00:19:11.672 12:15:10 -- fips/fips.sh@16 -- # killprocess 3434371 00:19:11.672 12:15:10 -- common/autotest_common.sh@936 -- # '[' -z 3434371 ']' 00:19:11.672 12:15:10 -- common/autotest_common.sh@940 -- # kill -0 3434371 00:19:11.672 12:15:10 -- common/autotest_common.sh@941 -- # uname 00:19:11.672 12:15:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:11.672 12:15:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3434371 00:19:11.672 12:15:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:11.672 12:15:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:11.672 12:15:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3434371' 00:19:11.672 killing process with pid 3434371 00:19:11.672 12:15:10 -- common/autotest_common.sh@955 -- # kill 3434371 00:19:11.672 Received shutdown signal, test time was about 10.000000 seconds 00:19:11.672 00:19:11.672 Latency(us) 00:19:11.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.672 =================================================================================================================== 00:19:11.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:11.672 [2024-04-26 12:15:10.831933] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:11.672 12:15:10 -- common/autotest_common.sh@960 -- # wait 3434371 00:19:11.672 12:15:10 -- fips/fips.sh@17 -- # nvmftestfini 00:19:11.672 12:15:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:11.672 12:15:10 -- nvmf/common.sh@117 -- # sync 00:19:11.672 12:15:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:11.672 12:15:10 -- nvmf/common.sh@120 -- # set +e 00:19:11.672 12:15:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:11.672 12:15:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:11.672 rmmod nvme_tcp 00:19:11.672 rmmod nvme_fabrics 00:19:11.672 rmmod nvme_keyring 
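The teardown above also shows how the tracepoint data referenced in the startup notices is preserved: the shared-memory trace file is archived into the job's output directory before the target is killed. Condensed from the process_shm helper in this trace (the tar command and output path are this job's; the spdk_trace usage is the hint printed at application start):

    # Archive the nvmf app's trace buffer for offline analysis.
    tar -C /dev/shm/ -cvzf \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0

    # While the target is still running, a live snapshot can be taken instead:
    #   spdk_trace -s nvmf -i 0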
00:19:11.672 12:15:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:11.672 12:15:10 -- nvmf/common.sh@124 -- # set -e 00:19:11.672 12:15:10 -- nvmf/common.sh@125 -- # return 0 00:19:11.672 12:15:10 -- nvmf/common.sh@478 -- # '[' -n 3434206 ']' 00:19:11.672 12:15:10 -- nvmf/common.sh@479 -- # killprocess 3434206 00:19:11.672 12:15:10 -- common/autotest_common.sh@936 -- # '[' -z 3434206 ']' 00:19:11.672 12:15:10 -- common/autotest_common.sh@940 -- # kill -0 3434206 00:19:11.672 12:15:10 -- common/autotest_common.sh@941 -- # uname 00:19:11.672 12:15:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:11.672 12:15:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3434206 00:19:11.672 12:15:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:11.672 12:15:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:11.672 12:15:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3434206' 00:19:11.672 killing process with pid 3434206 00:19:11.672 12:15:11 -- common/autotest_common.sh@955 -- # kill 3434206 00:19:11.672 [2024-04-26 12:15:11.050204] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:11.672 12:15:11 -- common/autotest_common.sh@960 -- # wait 3434206 00:19:11.672 12:15:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:11.672 12:15:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:11.672 12:15:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:11.672 12:15:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:11.672 12:15:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:11.672 12:15:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.672 12:15:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.672 12:15:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.243 12:15:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:12.243 12:15:13 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:12.243 00:19:12.243 real 0m22.481s 00:19:12.243 user 0m23.307s 00:19:12.243 sys 0m9.777s 00:19:12.243 12:15:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:12.243 12:15:13 -- common/autotest_common.sh@10 -- # set +x 00:19:12.243 ************************************ 00:19:12.243 END TEST nvmf_fips 00:19:12.243 ************************************ 00:19:12.243 12:15:13 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:19:12.243 12:15:13 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:19:12.243 12:15:13 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:19:12.243 12:15:13 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:19:12.243 12:15:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:12.243 12:15:13 -- common/autotest_common.sh@10 -- # set +x 00:19:20.383 12:15:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:20.383 12:15:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:20.383 12:15:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:20.383 12:15:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:20.383 12:15:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:20.383 12:15:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:20.383 12:15:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:20.383 12:15:20 -- nvmf/common.sh@295 -- # net_devs=() 00:19:20.383 12:15:20 -- nvmf/common.sh@295 -- # local -ga net_devs 
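The gather_supported_nvmf_pci_devs trace that starts here (and repeats before each later phase) boils down to matching known Intel/Mellanox device IDs against the PCI bus and collecting the matching netdevs. A simplified sketch of what the traced function is doing; pci_bus_cache is populated earlier in nvmf/common.sh from sysfs, and only the E810 path taken by this job (SPDK_TEST_NVMF_NICS=e810) is shown:

    # Candidate NIC list for SPDK_TEST_NVMF_NICS=e810 (Intel vendor ID 0x8086).
    e810=(${pci_bus_cache["0x8086:0x1592"]} ${pci_bus_cache["0x8086:0x159b"]})
    pci_devs=("${e810[@]}")          # e810 was requested, so only these devices are kept

    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")              # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done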
00:19:20.383 12:15:20 -- nvmf/common.sh@296 -- # e810=() 00:19:20.383 12:15:20 -- nvmf/common.sh@296 -- # local -ga e810 00:19:20.383 12:15:20 -- nvmf/common.sh@297 -- # x722=() 00:19:20.383 12:15:20 -- nvmf/common.sh@297 -- # local -ga x722 00:19:20.383 12:15:20 -- nvmf/common.sh@298 -- # mlx=() 00:19:20.383 12:15:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:20.383 12:15:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.383 12:15:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.383 12:15:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.383 12:15:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.383 12:15:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.383 12:15:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.383 12:15:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.383 12:15:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.383 12:15:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.383 12:15:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.383 12:15:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.383 12:15:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:20.383 12:15:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:20.383 12:15:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:20.383 12:15:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.383 12:15:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:20.383 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:20.383 12:15:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.383 12:15:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:20.383 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:20.383 12:15:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:20.383 12:15:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.383 12:15:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.383 12:15:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:20.383 12:15:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.383 12:15:20 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:31:00.0: cvl_0_0' 00:19:20.383 Found net devices under 0000:31:00.0: cvl_0_0 00:19:20.383 12:15:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.383 12:15:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.383 12:15:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.383 12:15:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:20.383 12:15:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.383 12:15:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:20.383 Found net devices under 0000:31:00.1: cvl_0_1 00:19:20.383 12:15:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.383 12:15:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:20.383 12:15:20 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.383 12:15:20 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:19:20.383 12:15:20 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:20.383 12:15:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:20.383 12:15:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:20.383 12:15:20 -- common/autotest_common.sh@10 -- # set +x 00:19:20.383 ************************************ 00:19:20.383 START TEST nvmf_perf_adq 00:19:20.383 ************************************ 00:19:20.383 12:15:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:20.383 * Looking for test storage... 00:19:20.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:20.383 12:15:20 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.383 12:15:20 -- nvmf/common.sh@7 -- # uname -s 00:19:20.383 12:15:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.383 12:15:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.383 12:15:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.383 12:15:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.383 12:15:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.383 12:15:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.383 12:15:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.383 12:15:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.383 12:15:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.383 12:15:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.383 12:15:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.383 12:15:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.383 12:15:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.383 12:15:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.383 12:15:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.383 12:15:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.383 12:15:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:20.383 12:15:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.383 12:15:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.383 12:15:20 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.383 12:15:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.384 12:15:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.384 12:15:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.384 12:15:20 -- paths/export.sh@5 -- # export PATH 00:19:20.384 12:15:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.384 12:15:20 -- nvmf/common.sh@47 -- # : 0 00:19:20.384 12:15:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:20.384 12:15:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:20.384 12:15:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.384 12:15:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.384 12:15:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.384 12:15:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:20.384 12:15:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:20.384 12:15:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:20.384 12:15:20 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:20.384 12:15:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:20.384 12:15:20 -- common/autotest_common.sh@10 -- # set +x 00:19:26.975 12:15:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:26.975 12:15:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:26.975 12:15:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:26.975 12:15:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:26.975 
12:15:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:26.975 12:15:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:26.975 12:15:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:26.975 12:15:27 -- nvmf/common.sh@295 -- # net_devs=() 00:19:26.975 12:15:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:26.975 12:15:27 -- nvmf/common.sh@296 -- # e810=() 00:19:26.975 12:15:27 -- nvmf/common.sh@296 -- # local -ga e810 00:19:26.975 12:15:27 -- nvmf/common.sh@297 -- # x722=() 00:19:26.975 12:15:27 -- nvmf/common.sh@297 -- # local -ga x722 00:19:26.975 12:15:27 -- nvmf/common.sh@298 -- # mlx=() 00:19:26.975 12:15:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:26.975 12:15:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.975 12:15:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.975 12:15:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.975 12:15:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.975 12:15:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.975 12:15:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.975 12:15:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.975 12:15:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.975 12:15:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.975 12:15:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.975 12:15:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.975 12:15:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:26.975 12:15:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:26.975 12:15:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:26.975 12:15:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.975 12:15:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:26.975 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:26.975 12:15:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.975 12:15:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:26.975 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:26.975 12:15:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:26.975 12:15:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:26.975 12:15:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:19:26.975 12:15:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.975 12:15:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:26.975 12:15:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.975 12:15:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:26.975 Found net devices under 0000:31:00.0: cvl_0_0 00:19:26.975 12:15:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.975 12:15:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.975 12:15:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.975 12:15:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:26.975 12:15:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.975 12:15:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:26.975 Found net devices under 0000:31:00.1: cvl_0_1 00:19:26.975 12:15:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.975 12:15:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:26.975 12:15:27 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.975 12:15:27 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:26.975 12:15:27 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:26.975 12:15:27 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:19:26.975 12:15:27 -- target/perf_adq.sh@52 -- # rmmod ice 00:19:27.919 12:15:29 -- target/perf_adq.sh@53 -- # modprobe ice 00:19:30.464 12:15:31 -- target/perf_adq.sh@54 -- # sleep 5 00:19:35.828 12:15:36 -- target/perf_adq.sh@67 -- # nvmftestinit 00:19:35.828 12:15:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:35.828 12:15:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.828 12:15:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:35.828 12:15:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:35.828 12:15:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:35.828 12:15:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.828 12:15:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.828 12:15:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.828 12:15:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:35.828 12:15:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:35.828 12:15:36 -- common/autotest_common.sh@10 -- # set +x 00:19:35.828 12:15:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:35.828 12:15:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:35.828 12:15:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:35.828 12:15:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:35.828 12:15:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:35.828 12:15:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:35.828 12:15:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:35.828 12:15:36 -- nvmf/common.sh@295 -- # net_devs=() 00:19:35.828 12:15:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:35.828 12:15:36 -- nvmf/common.sh@296 -- # e810=() 00:19:35.828 12:15:36 -- nvmf/common.sh@296 -- # local -ga e810 00:19:35.828 12:15:36 -- nvmf/common.sh@297 -- # x722=() 00:19:35.828 12:15:36 -- nvmf/common.sh@297 -- # local -ga x722 00:19:35.828 12:15:36 -- nvmf/common.sh@298 -- # mlx=() 00:19:35.828 12:15:36 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:19:35.828 12:15:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.828 12:15:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.828 12:15:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.828 12:15:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.828 12:15:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.828 12:15:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.828 12:15:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.828 12:15:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.828 12:15:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.828 12:15:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.828 12:15:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.828 12:15:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:35.828 12:15:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:35.828 12:15:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:35.828 12:15:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.828 12:15:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:35.828 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:35.828 12:15:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.828 12:15:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:35.828 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:35.828 12:15:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:35.828 12:15:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:35.828 12:15:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.828 12:15:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.829 12:15:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:35.829 12:15:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.829 12:15:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:35.829 Found net devices under 0000:31:00.0: cvl_0_0 00:19:35.829 12:15:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.829 12:15:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.829 12:15:36 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.829 12:15:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:35.829 12:15:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.829 12:15:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:35.829 Found net devices under 0000:31:00.1: cvl_0_1 00:19:35.829 12:15:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.829 12:15:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:35.829 12:15:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:35.829 12:15:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:35.829 12:15:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:35.829 12:15:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:35.829 12:15:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.829 12:15:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.829 12:15:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.829 12:15:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:35.829 12:15:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.829 12:15:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.829 12:15:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:35.829 12:15:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.829 12:15:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.829 12:15:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:35.829 12:15:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:35.829 12:15:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:35.829 12:15:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.829 12:15:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.829 12:15:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.829 12:15:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:35.829 12:15:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:35.829 12:15:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:35.829 12:15:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:35.829 12:15:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:35.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:19:35.829 00:19:35.829 --- 10.0.0.2 ping statistics --- 00:19:35.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.829 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:19:35.829 12:15:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:35.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:35.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:19:35.829 00:19:35.829 --- 10.0.0.1 ping statistics --- 00:19:35.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.829 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:19:35.829 12:15:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.829 12:15:36 -- nvmf/common.sh@411 -- # return 0 00:19:35.829 12:15:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:35.829 12:15:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.829 12:15:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:35.829 12:15:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:35.829 12:15:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.829 12:15:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:35.829 12:15:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:35.829 12:15:36 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:35.829 12:15:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:35.829 12:15:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:35.829 12:15:36 -- common/autotest_common.sh@10 -- # set +x 00:19:35.829 12:15:36 -- nvmf/common.sh@470 -- # nvmfpid=3447052 00:19:35.829 12:15:36 -- nvmf/common.sh@471 -- # waitforlisten 3447052 00:19:35.829 12:15:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:35.829 12:15:36 -- common/autotest_common.sh@817 -- # '[' -z 3447052 ']' 00:19:35.829 12:15:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.829 12:15:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:35.829 12:15:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.829 12:15:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:35.829 12:15:36 -- common/autotest_common.sh@10 -- # set +x 00:19:35.829 [2024-04-26 12:15:36.554635] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:35.829 [2024-04-26 12:15:36.554703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.829 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.829 [2024-04-26 12:15:36.628063] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:35.829 [2024-04-26 12:15:36.702039] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.829 [2024-04-26 12:15:36.702081] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.829 [2024-04-26 12:15:36.702090] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.829 [2024-04-26 12:15:36.702097] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.829 [2024-04-26 12:15:36.702104] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
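The nvmf_tcp_init trace above sets up the physical-NIC test topology: one E810 port (cvl_0_0) is moved into a network namespace to host the target at 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened, and connectivity is checked in both directions before the target application is launched inside the namespace. Condensed from this run's commands:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

    # The target is then started inside the namespace (pid 3447052 in this run):
    #   ip netns exec cvl_0_0_ns_spdk \
    #       /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    #       -i 0 -e 0xFFFF -m 0xF --wait-for-rpc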
00:19:35.829 [2024-04-26 12:15:36.702318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.829 [2024-04-26 12:15:36.702490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:35.829 [2024-04-26 12:15:36.702529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.829 [2024-04-26 12:15:36.702361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.403 12:15:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:36.403 12:15:37 -- common/autotest_common.sh@850 -- # return 0 00:19:36.403 12:15:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:36.403 12:15:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:36.403 12:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.403 12:15:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.403 12:15:37 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:19:36.403 12:15:37 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:36.403 12:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.403 12:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.403 12:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.403 12:15:37 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:19:36.403 12:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.403 12:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.403 12:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.403 12:15:37 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:36.403 12:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.403 12:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.403 [2024-04-26 12:15:37.463783] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.403 12:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.403 12:15:37 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:36.403 12:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.403 12:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.403 Malloc1 00:19:36.403 12:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.403 12:15:37 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:36.403 12:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.403 12:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.403 12:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.403 12:15:37 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:36.403 12:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.403 12:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.403 12:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.403 12:15:37 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:36.403 12:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.403 12:15:37 -- common/autotest_common.sh@10 -- # set +x 00:19:36.403 [2024-04-26 12:15:37.523084] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.403 12:15:37 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.403 12:15:37 -- target/perf_adq.sh@73 -- # perfpid=3447307 00:19:36.403 12:15:37 -- target/perf_adq.sh@74 -- # sleep 2 00:19:36.403 12:15:37 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:36.403 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.342 12:15:39 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:19:38.342 12:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.342 12:15:39 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:38.342 12:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:38.342 12:15:39 -- target/perf_adq.sh@76 -- # wc -l 00:19:38.342 12:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.601 12:15:39 -- target/perf_adq.sh@76 -- # count=4 00:19:38.601 12:15:39 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:19:38.601 12:15:39 -- target/perf_adq.sh@81 -- # wait 3447307 00:19:46.741 Initializing NVMe Controllers 00:19:46.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:46.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:46.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:46.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:46.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:46.741 Initialization complete. Launching workers. 00:19:46.741 ======================================================== 00:19:46.741 Latency(us) 00:19:46.741 Device Information : IOPS MiB/s Average min max 00:19:46.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10748.30 41.99 5955.86 1712.58 9547.91 00:19:46.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14116.00 55.14 4534.29 1126.86 10085.04 00:19:46.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13273.50 51.85 4821.49 1248.80 11289.23 00:19:46.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13076.00 51.08 4894.00 1306.02 10115.66 00:19:46.741 ======================================================== 00:19:46.741 Total : 51213.80 200.05 4998.92 1126.86 11289.23 00:19:46.741 00:19:46.741 12:15:47 -- target/perf_adq.sh@82 -- # nvmftestfini 00:19:46.741 12:15:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:46.741 12:15:47 -- nvmf/common.sh@117 -- # sync 00:19:46.741 12:15:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:46.741 12:15:47 -- nvmf/common.sh@120 -- # set +e 00:19:46.741 12:15:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:46.741 12:15:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:46.741 rmmod nvme_tcp 00:19:46.741 rmmod nvme_fabrics 00:19:46.741 rmmod nvme_keyring 00:19:46.741 12:15:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:46.741 12:15:47 -- nvmf/common.sh@124 -- # set -e 00:19:46.741 12:15:47 -- nvmf/common.sh@125 -- # return 0 00:19:46.741 12:15:47 -- nvmf/common.sh@478 -- # '[' -n 3447052 ']' 00:19:46.741 12:15:47 -- nvmf/common.sh@479 -- # killprocess 3447052 00:19:46.741 12:15:47 -- common/autotest_common.sh@936 -- # '[' -z 3447052 ']' 00:19:46.741 12:15:47 -- common/autotest_common.sh@940 
-- # kill -0 3447052 00:19:46.741 12:15:47 -- common/autotest_common.sh@941 -- # uname 00:19:46.741 12:15:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:46.741 12:15:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3447052 00:19:46.741 12:15:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:46.741 12:15:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:46.741 12:15:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3447052' 00:19:46.741 killing process with pid 3447052 00:19:46.741 12:15:47 -- common/autotest_common.sh@955 -- # kill 3447052 00:19:46.741 12:15:47 -- common/autotest_common.sh@960 -- # wait 3447052 00:19:47.002 12:15:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:47.002 12:15:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:47.002 12:15:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:47.002 12:15:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.002 12:15:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.002 12:15:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.002 12:15:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.002 12:15:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.916 12:15:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:48.916 12:15:50 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:19:48.916 12:15:50 -- target/perf_adq.sh@52 -- # rmmod ice 00:19:50.831 12:15:51 -- target/perf_adq.sh@53 -- # modprobe ice 00:19:52.745 12:15:53 -- target/perf_adq.sh@54 -- # sleep 5 00:19:58.033 12:15:58 -- target/perf_adq.sh@87 -- # nvmftestinit 00:19:58.033 12:15:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:58.033 12:15:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.033 12:15:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:58.033 12:15:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:58.033 12:15:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:58.033 12:15:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.033 12:15:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.033 12:15:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.033 12:15:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:58.033 12:15:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:58.033 12:15:58 -- common/autotest_common.sh@10 -- # set +x 00:19:58.033 12:15:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:58.033 12:15:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:58.033 12:15:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:58.033 12:15:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:58.033 12:15:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:58.033 12:15:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:58.033 12:15:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:58.033 12:15:58 -- nvmf/common.sh@295 -- # net_devs=() 00:19:58.033 12:15:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:58.033 12:15:58 -- nvmf/common.sh@296 -- # e810=() 00:19:58.033 12:15:58 -- nvmf/common.sh@296 -- # local -ga e810 00:19:58.033 12:15:58 -- nvmf/common.sh@297 -- # x722=() 00:19:58.033 12:15:58 -- nvmf/common.sh@297 -- # local -ga x722 00:19:58.033 12:15:58 -- nvmf/common.sh@298 -- # mlx=() 00:19:58.033 
12:15:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:58.033 12:15:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.033 12:15:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.033 12:15:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.033 12:15:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.033 12:15:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.033 12:15:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.033 12:15:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.033 12:15:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.033 12:15:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.033 12:15:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.033 12:15:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.033 12:15:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:58.033 12:15:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:58.033 12:15:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:58.033 12:15:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.033 12:15:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:58.033 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:58.033 12:15:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.033 12:15:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:58.033 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:58.033 12:15:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:58.033 12:15:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.033 12:15:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.033 12:15:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:58.033 12:15:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.033 12:15:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:58.033 Found net devices under 0000:31:00.0: cvl_0_0 00:19:58.033 12:15:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.033 12:15:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.033 12:15:58 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.033 12:15:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:58.033 12:15:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.033 12:15:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:58.033 Found net devices under 0000:31:00.1: cvl_0_1 00:19:58.033 12:15:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.033 12:15:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:58.033 12:15:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:58.033 12:15:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:58.033 12:15:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:58.033 12:15:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.033 12:15:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.033 12:15:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.033 12:15:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:58.033 12:15:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.033 12:15:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.033 12:15:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:58.033 12:15:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.033 12:15:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.033 12:15:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:58.033 12:15:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:58.033 12:15:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.033 12:15:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.033 12:15:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.033 12:15:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.033 12:15:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:58.033 12:15:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.033 12:15:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.033 12:15:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.033 12:15:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:58.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:19:58.033 00:19:58.033 --- 10.0.0.2 ping statistics --- 00:19:58.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.033 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:19:58.033 12:15:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:58.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:19:58.033 00:19:58.033 --- 10.0.0.1 ping statistics --- 00:19:58.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.033 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:19:58.033 12:15:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.033 12:15:59 -- nvmf/common.sh@411 -- # return 0 00:19:58.033 12:15:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:58.033 12:15:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.033 12:15:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:58.033 12:15:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:58.033 12:15:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.033 12:15:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:58.033 12:15:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:58.033 12:15:59 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:19:58.033 12:15:59 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:58.033 12:15:59 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:58.033 12:15:59 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:58.033 net.core.busy_poll = 1 00:19:58.033 12:15:59 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:58.033 net.core.busy_read = 1 00:19:58.033 12:15:59 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:58.033 12:15:59 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:58.294 12:15:59 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:58.294 12:15:59 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:58.294 12:15:59 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:58.294 12:15:59 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:58.294 12:15:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:58.294 12:15:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:58.294 12:15:59 -- common/autotest_common.sh@10 -- # set +x 00:19:58.294 12:15:59 -- nvmf/common.sh@470 -- # nvmfpid=3452086 00:19:58.294 12:15:59 -- nvmf/common.sh@471 -- # waitforlisten 3452086 00:19:58.294 12:15:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:58.294 12:15:59 -- common/autotest_common.sh@817 -- # '[' -z 3452086 ']' 00:19:58.294 12:15:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.294 12:15:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:58.294 12:15:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
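Before the second perf run, adq_configure_driver applies the ADQ-specific NIC setup traced above: hardware TC offload is enabled on the target-side port, busy polling is turned on globally, a two-class mqprio qdisc is created in channel mode, and a flower filter steers NVMe/TCP traffic (dst port 4420) into the dedicated traffic class in hardware. Condensed from this run's commands (the tc binary path and the set_xps_rxqs helper are the ones perf_adq.sh used):

    # Run NIC-side commands inside the target namespace.
    ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }

    ns ethtool --offload cvl_0_0 hw-tc-offload on
    ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1

    # Two traffic classes: TC0 = 2 queues starting at queue 0, TC1 = 2 queues starting at queue 2.
    ns /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ns /usr/sbin/tc qdisc add dev cvl_0_0 ingress

    # Steer NVMe/TCP (10.0.0.2:4420) into TC 1, offloaded to the NIC (skip_sw).
    ns /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

    # Align XPS/RX queue affinity with the traffic classes.
    ns /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0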
00:19:58.294 12:15:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:58.294 12:15:59 -- common/autotest_common.sh@10 -- # set +x 00:19:58.294 [2024-04-26 12:15:59.455151] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:58.294 [2024-04-26 12:15:59.455202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.294 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.553 [2024-04-26 12:15:59.523130] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:58.553 [2024-04-26 12:15:59.588957] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.553 [2024-04-26 12:15:59.588998] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.553 [2024-04-26 12:15:59.589006] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.553 [2024-04-26 12:15:59.589014] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.553 [2024-04-26 12:15:59.589021] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.553 [2024-04-26 12:15:59.589182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.554 [2024-04-26 12:15:59.589313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.554 [2024-04-26 12:15:59.589470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.554 [2024-04-26 12:15:59.589471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.125 12:16:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:59.125 12:16:00 -- common/autotest_common.sh@850 -- # return 0 00:19:59.125 12:16:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:59.125 12:16:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:59.125 12:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.125 12:16:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.125 12:16:00 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:19:59.125 12:16:00 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:59.125 12:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.125 12:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.125 12:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.125 12:16:00 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:19:59.125 12:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.125 12:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.385 12:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.385 12:16:00 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:59.385 12:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.385 12:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.385 [2024-04-26 12:16:00.363792] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.385 12:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.385 12:16:00 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
00:19:59.385 12:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.385 12:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.385 Malloc1 00:19:59.385 12:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.385 12:16:00 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:59.385 12:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.385 12:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.385 12:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.385 12:16:00 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:59.385 12:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.385 12:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.385 12:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.385 12:16:00 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:59.385 12:16:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.385 12:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:59.385 [2024-04-26 12:16:00.419164] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.385 12:16:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.385 12:16:00 -- target/perf_adq.sh@94 -- # perfpid=3452130 00:19:59.385 12:16:00 -- target/perf_adq.sh@95 -- # sleep 2 00:19:59.385 12:16:00 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:59.385 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.291 12:16:02 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:20:01.292 12:16:02 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:01.292 12:16:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.292 12:16:02 -- target/perf_adq.sh@97 -- # wc -l 00:20:01.292 12:16:02 -- common/autotest_common.sh@10 -- # set +x 00:20:01.292 12:16:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.292 12:16:02 -- target/perf_adq.sh@97 -- # count=2 00:20:01.292 12:16:02 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:20:01.292 12:16:02 -- target/perf_adq.sh@103 -- # wait 3452130 00:20:09.427 Initializing NVMe Controllers 00:20:09.427 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:09.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:09.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:09.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:09.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:09.427 Initialization complete. Launching workers. 
00:20:09.427 ========================================================
00:20:09.427 Latency(us)
00:20:09.427 Device Information : IOPS MiB/s Average min max
00:20:09.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10328.60 40.35 6196.00 1429.29 50166.35
00:20:09.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10013.30 39.11 6393.11 1144.62 51139.69
00:20:09.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9981.30 38.99 6431.82 1189.82 50010.75
00:20:09.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9010.90 35.20 7125.24 1462.31 51833.37
00:20:09.427 ========================================================
00:20:09.427 Total : 39334.09 153.65 6518.89 1144.62 51833.37
00:20:09.427
00:20:09.427 12:16:10 -- target/perf_adq.sh@104 -- # nvmftestfini
00:20:09.427 12:16:10 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:09.427 12:16:10 -- nvmf/common.sh@117 -- # sync
00:20:09.427 12:16:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:09.427 12:16:10 -- nvmf/common.sh@120 -- # set +e
00:20:09.427 12:16:10 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:09.427 12:16:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:09.427 rmmod nvme_tcp
00:20:09.427 rmmod nvme_fabrics
00:20:09.687 rmmod nvme_keyring
00:20:09.687 12:16:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:09.687 12:16:10 -- nvmf/common.sh@124 -- # set -e
00:20:09.687 12:16:10 -- nvmf/common.sh@125 -- # return 0
00:20:09.687 12:16:10 -- nvmf/common.sh@478 -- # '[' -n 3452086 ']'
00:20:09.687 12:16:10 -- nvmf/common.sh@479 -- # killprocess 3452086
00:20:09.687 12:16:10 -- common/autotest_common.sh@936 -- # '[' -z 3452086 ']'
00:20:09.687 12:16:10 -- common/autotest_common.sh@940 -- # kill -0 3452086
00:20:09.687 12:16:10 -- common/autotest_common.sh@941 -- # uname
00:20:09.687 12:16:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:09.687 12:16:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3452086
00:20:09.687 12:16:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:09.687 12:16:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:09.687 12:16:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3452086'
00:20:09.687 killing process with pid 3452086
00:20:09.687 12:16:10 -- common/autotest_common.sh@955 -- # kill 3452086
00:20:09.688 12:16:10 -- common/autotest_common.sh@960 -- # wait 3452086
00:20:09.688 12:16:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:09.688 12:16:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:09.688 12:16:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:09.688 12:16:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:09.688 12:16:10 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:09.688 12:16:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:09.688 12:16:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:09.688 12:16:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:12.232 12:16:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:12.232 12:16:12 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT
00:20:12.232
00:20:12.232 real 0m52.540s
00:20:12.232 user 2m49.554s
00:20:12.232 sys 0m10.397s
00:20:12.232 12:16:12 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:12.232 12:16:12 -- common/autotest_common.sh@10 -- # set +x
00:20:12.232
************************************ 00:20:12.232 END TEST nvmf_perf_adq 00:20:12.232 ************************************ 00:20:12.232 12:16:12 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:12.232 12:16:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:12.232 12:16:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:12.232 12:16:13 -- common/autotest_common.sh@10 -- # set +x 00:20:12.232 ************************************ 00:20:12.232 START TEST nvmf_shutdown 00:20:12.232 ************************************ 00:20:12.232 12:16:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:12.232 * Looking for test storage... 00:20:12.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:12.232 12:16:13 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.232 12:16:13 -- nvmf/common.sh@7 -- # uname -s 00:20:12.232 12:16:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.232 12:16:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.232 12:16:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.232 12:16:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.232 12:16:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.232 12:16:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.232 12:16:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.232 12:16:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.232 12:16:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.232 12:16:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.232 12:16:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:12.232 12:16:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:12.232 12:16:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.232 12:16:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.232 12:16:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.232 12:16:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.232 12:16:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.232 12:16:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.232 12:16:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.232 12:16:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.232 12:16:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.232 12:16:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.232 12:16:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.232 12:16:13 -- paths/export.sh@5 -- # export PATH 00:20:12.232 12:16:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.232 12:16:13 -- nvmf/common.sh@47 -- # : 0 00:20:12.232 12:16:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.232 12:16:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.232 12:16:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.232 12:16:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.232 12:16:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.232 12:16:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.232 12:16:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.232 12:16:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.232 12:16:13 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:12.232 12:16:13 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:12.232 12:16:13 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:12.232 12:16:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:12.232 12:16:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:12.232 12:16:13 -- common/autotest_common.sh@10 -- # set +x 00:20:12.232 ************************************ 00:20:12.232 START TEST nvmf_shutdown_tc1 00:20:12.232 ************************************ 00:20:12.232 12:16:13 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:20:12.232 12:16:13 -- target/shutdown.sh@74 -- # starttarget 00:20:12.232 12:16:13 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:12.232 12:16:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:12.232 12:16:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.232 12:16:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:12.232 12:16:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:12.232 12:16:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:12.232 
12:16:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.232 12:16:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.232 12:16:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.492 12:16:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:12.492 12:16:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:12.492 12:16:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.492 12:16:13 -- common/autotest_common.sh@10 -- # set +x 00:20:20.710 12:16:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:20.710 12:16:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:20.710 12:16:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:20.710 12:16:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:20.710 12:16:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:20.710 12:16:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:20.710 12:16:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:20.710 12:16:20 -- nvmf/common.sh@295 -- # net_devs=() 00:20:20.710 12:16:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:20.710 12:16:20 -- nvmf/common.sh@296 -- # e810=() 00:20:20.710 12:16:20 -- nvmf/common.sh@296 -- # local -ga e810 00:20:20.710 12:16:20 -- nvmf/common.sh@297 -- # x722=() 00:20:20.710 12:16:20 -- nvmf/common.sh@297 -- # local -ga x722 00:20:20.710 12:16:20 -- nvmf/common.sh@298 -- # mlx=() 00:20:20.710 12:16:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:20.710 12:16:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.710 12:16:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.710 12:16:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.710 12:16:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.710 12:16:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.711 12:16:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.711 12:16:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.711 12:16:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.711 12:16:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.711 12:16:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.711 12:16:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.711 12:16:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:20.711 12:16:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:20.711 12:16:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:20.711 12:16:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.711 12:16:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:20.711 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:20.711 12:16:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:20.711 12:16:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:20.711 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:20.711 12:16:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:20.711 12:16:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.711 12:16:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.711 12:16:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:20.711 12:16:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.711 12:16:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:20.711 Found net devices under 0000:31:00.0: cvl_0_0 00:20:20.711 12:16:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.711 12:16:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.711 12:16:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.711 12:16:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:20.711 12:16:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.711 12:16:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:20.711 Found net devices under 0000:31:00.1: cvl_0_1 00:20:20.711 12:16:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.711 12:16:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:20.711 12:16:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:20.711 12:16:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:20.711 12:16:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.711 12:16:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.711 12:16:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.711 12:16:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:20.711 12:16:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:20.711 12:16:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:20.711 12:16:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:20.711 12:16:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:20.711 12:16:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.711 12:16:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:20.711 12:16:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:20.711 12:16:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:20.711 12:16:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:20.711 12:16:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:20.711 12:16:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:20.711 12:16:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:20.711 12:16:20 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:20.711 12:16:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:20.711 12:16:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:20.711 12:16:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:20.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:20:20.711 00:20:20.711 --- 10.0.0.2 ping statistics --- 00:20:20.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.711 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:20:20.711 12:16:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:20.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:20.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:20:20.711 00:20:20.711 --- 10.0.0.1 ping statistics --- 00:20:20.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.711 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:20:20.711 12:16:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.711 12:16:20 -- nvmf/common.sh@411 -- # return 0 00:20:20.711 12:16:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:20.711 12:16:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.711 12:16:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:20.711 12:16:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.711 12:16:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:20.711 12:16:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:20.711 12:16:20 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:20.711 12:16:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:20.711 12:16:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:20.711 12:16:20 -- common/autotest_common.sh@10 -- # set +x 00:20:20.711 12:16:20 -- nvmf/common.sh@470 -- # nvmfpid=3458647 00:20:20.711 12:16:20 -- nvmf/common.sh@471 -- # waitforlisten 3458647 00:20:20.711 12:16:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:20.711 12:16:20 -- common/autotest_common.sh@817 -- # '[' -z 3458647 ']' 00:20:20.711 12:16:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.711 12:16:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:20.711 12:16:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.711 12:16:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:20.711 12:16:20 -- common/autotest_common.sh@10 -- # set +x 00:20:20.711 [2024-04-26 12:16:20.872017] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:20:20.711 [2024-04-26 12:16:20.872086] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.711 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.711 [2024-04-26 12:16:20.960735] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:20.711 [2024-04-26 12:16:21.051985] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.711 [2024-04-26 12:16:21.052049] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.711 [2024-04-26 12:16:21.052058] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.711 [2024-04-26 12:16:21.052065] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.711 [2024-04-26 12:16:21.052071] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:20.711 [2024-04-26 12:16:21.052211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.711 [2024-04-26 12:16:21.052377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.711 [2024-04-26 12:16:21.052543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.711 [2024-04-26 12:16:21.052544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:20.711 12:16:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:20.711 12:16:21 -- common/autotest_common.sh@850 -- # return 0 00:20:20.711 12:16:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:20.711 12:16:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:20.711 12:16:21 -- common/autotest_common.sh@10 -- # set +x 00:20:20.711 12:16:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.711 12:16:21 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:20.711 12:16:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.711 12:16:21 -- common/autotest_common.sh@10 -- # set +x 00:20:20.711 [2024-04-26 12:16:21.674217] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.711 12:16:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.711 12:16:21 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:20.711 12:16:21 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:20.711 12:16:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:20.711 12:16:21 -- common/autotest_common.sh@10 -- # set +x 00:20:20.711 12:16:21 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:20.711 12:16:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:20.711 12:16:21 -- target/shutdown.sh@28 -- # cat 00:20:20.711 12:16:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:20.711 12:16:21 -- target/shutdown.sh@28 -- # cat 00:20:20.711 12:16:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:20.711 12:16:21 -- target/shutdown.sh@28 -- # cat 00:20:20.711 12:16:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:20.711 12:16:21 -- target/shutdown.sh@28 -- # cat 00:20:20.712 12:16:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:20.712 12:16:21 -- target/shutdown.sh@28 
-- # cat 00:20:20.712 12:16:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:20.712 12:16:21 -- target/shutdown.sh@28 -- # cat 00:20:20.712 12:16:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:20.712 12:16:21 -- target/shutdown.sh@28 -- # cat 00:20:20.712 12:16:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:20.712 12:16:21 -- target/shutdown.sh@28 -- # cat 00:20:20.712 12:16:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:20.712 12:16:21 -- target/shutdown.sh@28 -- # cat 00:20:20.712 12:16:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:20.712 12:16:21 -- target/shutdown.sh@28 -- # cat 00:20:20.712 12:16:21 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:20.712 12:16:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.712 12:16:21 -- common/autotest_common.sh@10 -- # set +x 00:20:20.712 Malloc1 00:20:20.712 [2024-04-26 12:16:21.775121] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.712 Malloc2 00:20:20.712 Malloc3 00:20:20.712 Malloc4 00:20:20.712 Malloc5 00:20:20.973 Malloc6 00:20:20.973 Malloc7 00:20:20.973 Malloc8 00:20:20.973 Malloc9 00:20:20.973 Malloc10 00:20:20.973 12:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.973 12:16:22 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:20.973 12:16:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:20.973 12:16:22 -- common/autotest_common.sh@10 -- # set +x 00:20:20.973 12:16:22 -- target/shutdown.sh@78 -- # perfpid=3459032 00:20:20.973 12:16:22 -- target/shutdown.sh@79 -- # waitforlisten 3459032 /var/tmp/bdevperf.sock 00:20:20.973 12:16:22 -- common/autotest_common.sh@817 -- # '[' -z 3459032 ']' 00:20:20.973 12:16:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.973 12:16:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:20.973 12:16:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
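The rpcs.txt batch assembled and replayed above amounts to the same per-subsystem bring-up that the perf_adq run showed explicitly earlier. One way to write it by hand, as a rough sketch against scripts/rpc.py and the default /var/tmp/spdk.sock socket (which is what rpc_cmd in the harness effectively drives), is shown below. The transport flags are copied verbatim from the trace, the 64 MB / 512-byte malloc geometry comes from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE in shutdown.sh, and the SPDK... serial-number pattern is an assumption carried over from the cnode1 subsystem created in the earlier test.

rpc=./scripts/rpc.py   # from an SPDK checkout; target already listening on /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192          # flags exactly as passed by the test
for i in $(seq 1 10); do
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$(printf '%014d' "$i")"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done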
00:20:20.973 12:16:22 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:20.973 12:16:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:20.973 12:16:22 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:20.973 12:16:22 -- common/autotest_common.sh@10 -- # set +x 00:20:20.973 12:16:22 -- nvmf/common.sh@521 -- # config=() 00:20:20.973 12:16:22 -- nvmf/common.sh@521 -- # local subsystem config 00:20:20.974 12:16:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.974 12:16:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.974 { 00:20:20.974 "params": { 00:20:20.974 "name": "Nvme$subsystem", 00:20:20.974 "trtype": "$TEST_TRANSPORT", 00:20:20.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.974 "adrfam": "ipv4", 00:20:20.974 "trsvcid": "$NVMF_PORT", 00:20:20.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.974 "hdgst": ${hdgst:-false}, 00:20:20.974 "ddgst": ${ddgst:-false} 00:20:20.974 }, 00:20:20.974 "method": "bdev_nvme_attach_controller" 00:20:20.974 } 00:20:20.974 EOF 00:20:20.974 )") 00:20:20.974 12:16:22 -- nvmf/common.sh@543 -- # cat 00:20:20.974 12:16:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.974 12:16:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.974 { 00:20:20.974 "params": { 00:20:20.974 "name": "Nvme$subsystem", 00:20:20.974 "trtype": "$TEST_TRANSPORT", 00:20:20.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.974 "adrfam": "ipv4", 00:20:20.974 "trsvcid": "$NVMF_PORT", 00:20:20.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.974 "hdgst": ${hdgst:-false}, 00:20:20.974 "ddgst": ${ddgst:-false} 00:20:20.974 }, 00:20:20.974 "method": "bdev_nvme_attach_controller" 00:20:20.974 } 00:20:20.974 EOF 00:20:20.974 )") 00:20:20.974 12:16:22 -- nvmf/common.sh@543 -- # cat 00:20:20.974 12:16:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.974 12:16:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.974 { 00:20:20.974 "params": { 00:20:20.974 "name": "Nvme$subsystem", 00:20:20.974 "trtype": "$TEST_TRANSPORT", 00:20:20.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.974 "adrfam": "ipv4", 00:20:20.974 "trsvcid": "$NVMF_PORT", 00:20:20.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.974 "hdgst": ${hdgst:-false}, 00:20:20.974 "ddgst": ${ddgst:-false} 00:20:20.974 }, 00:20:20.974 "method": "bdev_nvme_attach_controller" 00:20:20.974 } 00:20:20.974 EOF 00:20:20.974 )") 00:20:21.235 12:16:22 -- nvmf/common.sh@543 -- # cat 00:20:21.235 12:16:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:21.235 12:16:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:21.235 { 00:20:21.235 "params": { 00:20:21.235 "name": "Nvme$subsystem", 00:20:21.235 "trtype": "$TEST_TRANSPORT", 00:20:21.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.235 "adrfam": "ipv4", 00:20:21.235 "trsvcid": "$NVMF_PORT", 00:20:21.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.235 "hdgst": ${hdgst:-false}, 00:20:21.235 "ddgst": ${ddgst:-false} 00:20:21.235 }, 00:20:21.235 "method": "bdev_nvme_attach_controller" 00:20:21.235 } 00:20:21.235 EOF 00:20:21.235 )") 00:20:21.235 12:16:22 -- 
nvmf/common.sh@543 -- # cat 00:20:21.235 12:16:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:21.235 12:16:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:21.235 { 00:20:21.235 "params": { 00:20:21.235 "name": "Nvme$subsystem", 00:20:21.235 "trtype": "$TEST_TRANSPORT", 00:20:21.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.235 "adrfam": "ipv4", 00:20:21.235 "trsvcid": "$NVMF_PORT", 00:20:21.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.235 "hdgst": ${hdgst:-false}, 00:20:21.235 "ddgst": ${ddgst:-false} 00:20:21.235 }, 00:20:21.235 "method": "bdev_nvme_attach_controller" 00:20:21.235 } 00:20:21.235 EOF 00:20:21.235 )") 00:20:21.235 12:16:22 -- nvmf/common.sh@543 -- # cat 00:20:21.236 12:16:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:21.236 12:16:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:21.236 { 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme$subsystem", 00:20:21.236 "trtype": "$TEST_TRANSPORT", 00:20:21.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "$NVMF_PORT", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.236 "hdgst": ${hdgst:-false}, 00:20:21.236 "ddgst": ${ddgst:-false} 00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 } 00:20:21.236 EOF 00:20:21.236 )") 00:20:21.236 12:16:22 -- nvmf/common.sh@543 -- # cat 00:20:21.236 12:16:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:21.236 12:16:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:21.236 { 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme$subsystem", 00:20:21.236 "trtype": "$TEST_TRANSPORT", 00:20:21.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "$NVMF_PORT", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.236 "hdgst": ${hdgst:-false}, 00:20:21.236 "ddgst": ${ddgst:-false} 00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 } 00:20:21.236 EOF 00:20:21.236 )") 00:20:21.236 [2024-04-26 12:16:22.223888] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:20:21.236 [2024-04-26 12:16:22.223965] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:21.236 12:16:22 -- nvmf/common.sh@543 -- # cat 00:20:21.236 12:16:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:21.236 12:16:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:21.236 { 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme$subsystem", 00:20:21.236 "trtype": "$TEST_TRANSPORT", 00:20:21.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "$NVMF_PORT", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.236 "hdgst": ${hdgst:-false}, 00:20:21.236 "ddgst": ${ddgst:-false} 00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 } 00:20:21.236 EOF 00:20:21.236 )") 00:20:21.236 12:16:22 -- nvmf/common.sh@543 -- # cat 00:20:21.236 12:16:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:21.236 12:16:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:21.236 { 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme$subsystem", 00:20:21.236 "trtype": "$TEST_TRANSPORT", 00:20:21.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "$NVMF_PORT", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.236 "hdgst": ${hdgst:-false}, 00:20:21.236 "ddgst": ${ddgst:-false} 00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 } 00:20:21.236 EOF 00:20:21.236 )") 00:20:21.236 12:16:22 -- nvmf/common.sh@543 -- # cat 00:20:21.236 12:16:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:21.236 12:16:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:21.236 { 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme$subsystem", 00:20:21.236 "trtype": "$TEST_TRANSPORT", 00:20:21.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "$NVMF_PORT", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.236 "hdgst": ${hdgst:-false}, 00:20:21.236 "ddgst": ${ddgst:-false} 00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 } 00:20:21.236 EOF 00:20:21.236 )") 00:20:21.236 12:16:22 -- nvmf/common.sh@543 -- # cat 00:20:21.236 12:16:22 -- nvmf/common.sh@545 -- # jq . 
00:20:21.236 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.236 12:16:22 -- nvmf/common.sh@546 -- # IFS=, 00:20:21.236 12:16:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme1", 00:20:21.236 "trtype": "tcp", 00:20:21.236 "traddr": "10.0.0.2", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "4420", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.236 "hdgst": false, 00:20:21.236 "ddgst": false 00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 },{ 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme2", 00:20:21.236 "trtype": "tcp", 00:20:21.236 "traddr": "10.0.0.2", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "4420", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:21.236 "hdgst": false, 00:20:21.236 "ddgst": false 00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 },{ 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme3", 00:20:21.236 "trtype": "tcp", 00:20:21.236 "traddr": "10.0.0.2", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "4420", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:21.236 "hdgst": false, 00:20:21.236 "ddgst": false 00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 },{ 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme4", 00:20:21.236 "trtype": "tcp", 00:20:21.236 "traddr": "10.0.0.2", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "4420", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:21.236 "hdgst": false, 00:20:21.236 "ddgst": false 00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 },{ 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme5", 00:20:21.236 "trtype": "tcp", 00:20:21.236 "traddr": "10.0.0.2", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "4420", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:21.236 "hdgst": false, 00:20:21.236 "ddgst": false 00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 },{ 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme6", 00:20:21.236 "trtype": "tcp", 00:20:21.236 "traddr": "10.0.0.2", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "4420", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:21.236 "hdgst": false, 00:20:21.236 "ddgst": false 00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 },{ 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme7", 00:20:21.236 "trtype": "tcp", 00:20:21.236 "traddr": "10.0.0.2", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "4420", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:21.236 "hdgst": false, 00:20:21.236 "ddgst": false 00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 },{ 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme8", 00:20:21.236 "trtype": "tcp", 00:20:21.236 "traddr": "10.0.0.2", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "4420", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:21.236 "hdgst": false, 00:20:21.236 "ddgst": false 
00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 },{ 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme9", 00:20:21.236 "trtype": "tcp", 00:20:21.236 "traddr": "10.0.0.2", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "4420", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:21.236 "hdgst": false, 00:20:21.236 "ddgst": false 00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 },{ 00:20:21.236 "params": { 00:20:21.236 "name": "Nvme10", 00:20:21.236 "trtype": "tcp", 00:20:21.236 "traddr": "10.0.0.2", 00:20:21.236 "adrfam": "ipv4", 00:20:21.236 "trsvcid": "4420", 00:20:21.236 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:21.236 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:21.236 "hdgst": false, 00:20:21.236 "ddgst": false 00:20:21.236 }, 00:20:21.236 "method": "bdev_nvme_attach_controller" 00:20:21.236 }' 00:20:21.236 [2024-04-26 12:16:22.288276] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.236 [2024-04-26 12:16:22.351412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.621 12:16:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:22.621 12:16:23 -- common/autotest_common.sh@850 -- # return 0 00:20:22.621 12:16:23 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:22.621 12:16:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.621 12:16:23 -- common/autotest_common.sh@10 -- # set +x 00:20:22.621 12:16:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.621 12:16:23 -- target/shutdown.sh@83 -- # kill -9 3459032 00:20:22.621 12:16:23 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:22.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3459032 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:22.621 12:16:23 -- target/shutdown.sh@87 -- # sleep 1 00:20:23.564 12:16:24 -- target/shutdown.sh@88 -- # kill -0 3458647 00:20:23.564 12:16:24 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:23.564 12:16:24 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:23.564 12:16:24 -- nvmf/common.sh@521 -- # config=() 00:20:23.564 12:16:24 -- nvmf/common.sh@521 -- # local subsystem config 00:20:23.564 12:16:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.564 12:16:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.564 { 00:20:23.564 "params": { 00:20:23.564 "name": "Nvme$subsystem", 00:20:23.564 "trtype": "$TEST_TRANSPORT", 00:20:23.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.564 "adrfam": "ipv4", 00:20:23.564 "trsvcid": "$NVMF_PORT", 00:20:23.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.564 "hdgst": ${hdgst:-false}, 00:20:23.564 "ddgst": ${ddgst:-false} 00:20:23.564 }, 00:20:23.564 "method": "bdev_nvme_attach_controller" 00:20:23.564 } 00:20:23.564 EOF 00:20:23.564 )") 00:20:23.564 12:16:24 -- nvmf/common.sh@543 -- # cat 00:20:23.564 12:16:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.564 12:16:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.564 { 00:20:23.564 "params": { 00:20:23.564 "name": "Nvme$subsystem", 00:20:23.564 "trtype": 
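For reference, the configuration that gen_nvmf_target_json feeds to bdev_svc above (and to the bdevperf run that follows) contains one fragment of the following shape per subsystem. This is the per-controller fragment already printed in the trace, reproduced unflattened; the surrounding wrapper that gen_nvmf_target_json adds around the fragments is elided here.

{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}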
"$TEST_TRANSPORT", 00:20:23.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.564 "adrfam": "ipv4", 00:20:23.564 "trsvcid": "$NVMF_PORT", 00:20:23.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.564 "hdgst": ${hdgst:-false}, 00:20:23.564 "ddgst": ${ddgst:-false} 00:20:23.564 }, 00:20:23.564 "method": "bdev_nvme_attach_controller" 00:20:23.564 } 00:20:23.564 EOF 00:20:23.564 )") 00:20:23.564 12:16:24 -- nvmf/common.sh@543 -- # cat 00:20:23.564 12:16:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.564 12:16:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.564 { 00:20:23.564 "params": { 00:20:23.564 "name": "Nvme$subsystem", 00:20:23.564 "trtype": "$TEST_TRANSPORT", 00:20:23.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.564 "adrfam": "ipv4", 00:20:23.564 "trsvcid": "$NVMF_PORT", 00:20:23.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.564 "hdgst": ${hdgst:-false}, 00:20:23.564 "ddgst": ${ddgst:-false} 00:20:23.564 }, 00:20:23.564 "method": "bdev_nvme_attach_controller" 00:20:23.564 } 00:20:23.564 EOF 00:20:23.564 )") 00:20:23.564 12:16:24 -- nvmf/common.sh@543 -- # cat 00:20:23.564 12:16:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.564 12:16:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.564 { 00:20:23.564 "params": { 00:20:23.564 "name": "Nvme$subsystem", 00:20:23.564 "trtype": "$TEST_TRANSPORT", 00:20:23.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.564 "adrfam": "ipv4", 00:20:23.564 "trsvcid": "$NVMF_PORT", 00:20:23.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.564 "hdgst": ${hdgst:-false}, 00:20:23.564 "ddgst": ${ddgst:-false} 00:20:23.564 }, 00:20:23.564 "method": "bdev_nvme_attach_controller" 00:20:23.564 } 00:20:23.564 EOF 00:20:23.564 )") 00:20:23.564 12:16:24 -- nvmf/common.sh@543 -- # cat 00:20:23.564 12:16:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.564 12:16:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.564 { 00:20:23.564 "params": { 00:20:23.564 "name": "Nvme$subsystem", 00:20:23.564 "trtype": "$TEST_TRANSPORT", 00:20:23.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.564 "adrfam": "ipv4", 00:20:23.564 "trsvcid": "$NVMF_PORT", 00:20:23.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.564 "hdgst": ${hdgst:-false}, 00:20:23.564 "ddgst": ${ddgst:-false} 00:20:23.565 }, 00:20:23.565 "method": "bdev_nvme_attach_controller" 00:20:23.565 } 00:20:23.565 EOF 00:20:23.565 )") 00:20:23.565 12:16:24 -- nvmf/common.sh@543 -- # cat 00:20:23.565 12:16:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.565 12:16:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.565 { 00:20:23.565 "params": { 00:20:23.565 "name": "Nvme$subsystem", 00:20:23.565 "trtype": "$TEST_TRANSPORT", 00:20:23.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.565 "adrfam": "ipv4", 00:20:23.565 "trsvcid": "$NVMF_PORT", 00:20:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.565 "hdgst": ${hdgst:-false}, 00:20:23.565 "ddgst": ${ddgst:-false} 00:20:23.565 }, 00:20:23.565 "method": "bdev_nvme_attach_controller" 00:20:23.565 } 00:20:23.565 EOF 00:20:23.565 )") 00:20:23.565 12:16:24 -- nvmf/common.sh@543 -- # cat 00:20:23.565 
12:16:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.565 12:16:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.565 { 00:20:23.565 "params": { 00:20:23.565 "name": "Nvme$subsystem", 00:20:23.565 "trtype": "$TEST_TRANSPORT", 00:20:23.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.565 "adrfam": "ipv4", 00:20:23.565 "trsvcid": "$NVMF_PORT", 00:20:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.565 "hdgst": ${hdgst:-false}, 00:20:23.565 "ddgst": ${ddgst:-false} 00:20:23.565 }, 00:20:23.565 "method": "bdev_nvme_attach_controller" 00:20:23.565 } 00:20:23.565 EOF 00:20:23.565 )") 00:20:23.565 [2024-04-26 12:16:24.705671] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:23.565 [2024-04-26 12:16:24.705722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459409 ] 00:20:23.565 12:16:24 -- nvmf/common.sh@543 -- # cat 00:20:23.565 12:16:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.565 12:16:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.565 { 00:20:23.565 "params": { 00:20:23.565 "name": "Nvme$subsystem", 00:20:23.565 "trtype": "$TEST_TRANSPORT", 00:20:23.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.565 "adrfam": "ipv4", 00:20:23.565 "trsvcid": "$NVMF_PORT", 00:20:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.565 "hdgst": ${hdgst:-false}, 00:20:23.565 "ddgst": ${ddgst:-false} 00:20:23.565 }, 00:20:23.565 "method": "bdev_nvme_attach_controller" 00:20:23.565 } 00:20:23.565 EOF 00:20:23.565 )") 00:20:23.565 12:16:24 -- nvmf/common.sh@543 -- # cat 00:20:23.565 12:16:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.565 12:16:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.565 { 00:20:23.565 "params": { 00:20:23.565 "name": "Nvme$subsystem", 00:20:23.565 "trtype": "$TEST_TRANSPORT", 00:20:23.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.565 "adrfam": "ipv4", 00:20:23.565 "trsvcid": "$NVMF_PORT", 00:20:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.565 "hdgst": ${hdgst:-false}, 00:20:23.565 "ddgst": ${ddgst:-false} 00:20:23.565 }, 00:20:23.565 "method": "bdev_nvme_attach_controller" 00:20:23.565 } 00:20:23.565 EOF 00:20:23.565 )") 00:20:23.565 12:16:24 -- nvmf/common.sh@543 -- # cat 00:20:23.565 12:16:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.565 12:16:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.565 { 00:20:23.565 "params": { 00:20:23.565 "name": "Nvme$subsystem", 00:20:23.565 "trtype": "$TEST_TRANSPORT", 00:20:23.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.565 "adrfam": "ipv4", 00:20:23.565 "trsvcid": "$NVMF_PORT", 00:20:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.565 "hdgst": ${hdgst:-false}, 00:20:23.565 "ddgst": ${ddgst:-false} 00:20:23.565 }, 00:20:23.565 "method": "bdev_nvme_attach_controller" 00:20:23.565 } 00:20:23.565 EOF 00:20:23.565 )") 00:20:23.565 12:16:24 -- nvmf/common.sh@543 -- # cat 00:20:23.565 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.565 12:16:24 -- nvmf/common.sh@545 -- # jq . 
00:20:23.565 12:16:24 -- nvmf/common.sh@546 -- # IFS=, 00:20:23.565 12:16:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:23.565 "params": { 00:20:23.565 "name": "Nvme1", 00:20:23.565 "trtype": "tcp", 00:20:23.565 "traddr": "10.0.0.2", 00:20:23.565 "adrfam": "ipv4", 00:20:23.565 "trsvcid": "4420", 00:20:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.565 "hdgst": false, 00:20:23.565 "ddgst": false 00:20:23.565 }, 00:20:23.565 "method": "bdev_nvme_attach_controller" 00:20:23.565 },{ 00:20:23.565 "params": { 00:20:23.565 "name": "Nvme2", 00:20:23.565 "trtype": "tcp", 00:20:23.565 "traddr": "10.0.0.2", 00:20:23.565 "adrfam": "ipv4", 00:20:23.565 "trsvcid": "4420", 00:20:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:23.565 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:23.565 "hdgst": false, 00:20:23.565 "ddgst": false 00:20:23.565 }, 00:20:23.565 "method": "bdev_nvme_attach_controller" 00:20:23.565 },{ 00:20:23.565 "params": { 00:20:23.565 "name": "Nvme3", 00:20:23.565 "trtype": "tcp", 00:20:23.565 "traddr": "10.0.0.2", 00:20:23.565 "adrfam": "ipv4", 00:20:23.565 "trsvcid": "4420", 00:20:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:23.565 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:23.565 "hdgst": false, 00:20:23.565 "ddgst": false 00:20:23.565 }, 00:20:23.565 "method": "bdev_nvme_attach_controller" 00:20:23.565 },{ 00:20:23.565 "params": { 00:20:23.565 "name": "Nvme4", 00:20:23.565 "trtype": "tcp", 00:20:23.565 "traddr": "10.0.0.2", 00:20:23.565 "adrfam": "ipv4", 00:20:23.565 "trsvcid": "4420", 00:20:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:23.565 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:23.565 "hdgst": false, 00:20:23.565 "ddgst": false 00:20:23.565 }, 00:20:23.565 "method": "bdev_nvme_attach_controller" 00:20:23.565 },{ 00:20:23.565 "params": { 00:20:23.565 "name": "Nvme5", 00:20:23.565 "trtype": "tcp", 00:20:23.565 "traddr": "10.0.0.2", 00:20:23.565 "adrfam": "ipv4", 00:20:23.565 "trsvcid": "4420", 00:20:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:23.565 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:23.565 "hdgst": false, 00:20:23.565 "ddgst": false 00:20:23.565 }, 00:20:23.565 "method": "bdev_nvme_attach_controller" 00:20:23.565 },{ 00:20:23.565 "params": { 00:20:23.565 "name": "Nvme6", 00:20:23.565 "trtype": "tcp", 00:20:23.565 "traddr": "10.0.0.2", 00:20:23.565 "adrfam": "ipv4", 00:20:23.565 "trsvcid": "4420", 00:20:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:23.565 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:23.565 "hdgst": false, 00:20:23.565 "ddgst": false 00:20:23.565 }, 00:20:23.565 "method": "bdev_nvme_attach_controller" 00:20:23.565 },{ 00:20:23.565 "params": { 00:20:23.565 "name": "Nvme7", 00:20:23.565 "trtype": "tcp", 00:20:23.565 "traddr": "10.0.0.2", 00:20:23.565 "adrfam": "ipv4", 00:20:23.565 "trsvcid": "4420", 00:20:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:23.565 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:23.565 "hdgst": false, 00:20:23.565 "ddgst": false 00:20:23.565 }, 00:20:23.565 "method": "bdev_nvme_attach_controller" 00:20:23.565 },{ 00:20:23.565 "params": { 00:20:23.565 "name": "Nvme8", 00:20:23.565 "trtype": "tcp", 00:20:23.565 "traddr": "10.0.0.2", 00:20:23.565 "adrfam": "ipv4", 00:20:23.565 "trsvcid": "4420", 00:20:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:23.565 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:23.565 "hdgst": false, 00:20:23.565 "ddgst": false 00:20:23.565 }, 00:20:23.565 "method": 
"bdev_nvme_attach_controller" 00:20:23.565 },{ 00:20:23.565 "params": { 00:20:23.565 "name": "Nvme9", 00:20:23.565 "trtype": "tcp", 00:20:23.565 "traddr": "10.0.0.2", 00:20:23.565 "adrfam": "ipv4", 00:20:23.565 "trsvcid": "4420", 00:20:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:23.565 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:23.565 "hdgst": false, 00:20:23.565 "ddgst": false 00:20:23.565 }, 00:20:23.565 "method": "bdev_nvme_attach_controller" 00:20:23.565 },{ 00:20:23.566 "params": { 00:20:23.566 "name": "Nvme10", 00:20:23.566 "trtype": "tcp", 00:20:23.566 "traddr": "10.0.0.2", 00:20:23.566 "adrfam": "ipv4", 00:20:23.566 "trsvcid": "4420", 00:20:23.566 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:23.566 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:23.566 "hdgst": false, 00:20:23.566 "ddgst": false 00:20:23.566 }, 00:20:23.566 "method": "bdev_nvme_attach_controller" 00:20:23.566 }' 00:20:23.566 [2024-04-26 12:16:24.769108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.827 [2024-04-26 12:16:24.831351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.208 Running I/O for 1 seconds... 00:20:26.149 00:20:26.149 Latency(us) 00:20:26.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.149 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.149 Verification LBA range: start 0x0 length 0x400 00:20:26.149 Nvme1n1 : 1.15 167.46 10.47 0.00 0.00 378536.96 41506.13 347777.71 00:20:26.149 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.149 Verification LBA range: start 0x0 length 0x400 00:20:26.149 Nvme2n1 : 1.15 167.18 10.45 0.00 0.00 372277.48 18240.85 326806.19 00:20:26.149 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.149 Verification LBA range: start 0x0 length 0x400 00:20:26.149 Nvme3n1 : 1.20 213.89 13.37 0.00 0.00 286391.04 19770.03 339039.57 00:20:26.149 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.149 Verification LBA range: start 0x0 length 0x400 00:20:26.149 Nvme4n1 : 1.20 212.45 13.28 0.00 0.00 283812.69 30365.01 305834.67 00:20:26.149 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.149 Verification LBA range: start 0x0 length 0x400 00:20:26.149 Nvme5n1 : 1.21 211.86 13.24 0.00 0.00 279831.25 15619.41 365253.97 00:20:26.149 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.149 Verification LBA range: start 0x0 length 0x400 00:20:26.149 Nvme6n1 : 1.16 165.99 10.37 0.00 0.00 349480.11 21080.75 335544.32 00:20:26.149 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.149 Verification LBA range: start 0x0 length 0x400 00:20:26.149 Nvme7n1 : 1.20 213.07 13.32 0.00 0.00 267559.68 22391.47 312825.17 00:20:26.149 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.149 Verification LBA range: start 0x0 length 0x400 00:20:26.149 Nvme8n1 : 1.21 210.85 13.18 0.00 0.00 266794.88 15619.41 335544.32 00:20:26.149 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.149 Verification LBA range: start 0x0 length 0x400 00:20:26.149 Nvme9n1 : 1.16 165.28 10.33 0.00 0.00 332057.32 26105.17 340787.20 00:20:26.149 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:26.149 Verification LBA range: start 0x0 length 0x400 00:20:26.149 Nvme10n1 : 1.22 209.95 13.12 0.00 0.00 258570.99 11851.09 
358263.47 00:20:26.149 =================================================================================================================== 00:20:26.149 Total : 1937.99 121.12 0.00 0.00 301913.83 11851.09 365253.97 00:20:26.409 12:16:27 -- target/shutdown.sh@94 -- # stoptarget 00:20:26.409 12:16:27 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:26.409 12:16:27 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:26.409 12:16:27 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:26.409 12:16:27 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:26.409 12:16:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:26.409 12:16:27 -- nvmf/common.sh@117 -- # sync 00:20:26.409 12:16:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:26.409 12:16:27 -- nvmf/common.sh@120 -- # set +e 00:20:26.409 12:16:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:26.409 12:16:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:26.409 rmmod nvme_tcp 00:20:26.409 rmmod nvme_fabrics 00:20:26.409 rmmod nvme_keyring 00:20:26.409 12:16:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:26.409 12:16:27 -- nvmf/common.sh@124 -- # set -e 00:20:26.409 12:16:27 -- nvmf/common.sh@125 -- # return 0 00:20:26.409 12:16:27 -- nvmf/common.sh@478 -- # '[' -n 3458647 ']' 00:20:26.409 12:16:27 -- nvmf/common.sh@479 -- # killprocess 3458647 00:20:26.409 12:16:27 -- common/autotest_common.sh@936 -- # '[' -z 3458647 ']' 00:20:26.409 12:16:27 -- common/autotest_common.sh@940 -- # kill -0 3458647 00:20:26.409 12:16:27 -- common/autotest_common.sh@941 -- # uname 00:20:26.409 12:16:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:26.409 12:16:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3458647 00:20:26.668 12:16:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:26.669 12:16:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:26.669 12:16:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3458647' 00:20:26.669 killing process with pid 3458647 00:20:26.669 12:16:27 -- common/autotest_common.sh@955 -- # kill 3458647 00:20:26.669 12:16:27 -- common/autotest_common.sh@960 -- # wait 3458647 00:20:26.669 12:16:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:26.669 12:16:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:26.669 12:16:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:26.669 12:16:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.669 12:16:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:26.669 12:16:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.669 12:16:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.669 12:16:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.211 12:16:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:29.211 00:20:29.211 real 0m16.497s 00:20:29.211 user 0m33.425s 00:20:29.211 sys 0m6.447s 00:20:29.211 12:16:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:29.211 12:16:29 -- common/autotest_common.sh@10 -- # set +x 00:20:29.211 ************************************ 00:20:29.211 END TEST nvmf_shutdown_tc1 00:20:29.211 ************************************ 00:20:29.211 12:16:29 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:29.211 12:16:29 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:29.211 12:16:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:29.211 12:16:29 -- common/autotest_common.sh@10 -- # set +x 00:20:29.211 ************************************ 00:20:29.211 START TEST nvmf_shutdown_tc2 00:20:29.211 ************************************ 00:20:29.211 12:16:30 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:20:29.211 12:16:30 -- target/shutdown.sh@99 -- # starttarget 00:20:29.211 12:16:30 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:29.211 12:16:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:29.211 12:16:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.211 12:16:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:29.211 12:16:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:29.211 12:16:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:29.211 12:16:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.211 12:16:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.211 12:16:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.211 12:16:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:29.211 12:16:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:29.211 12:16:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:29.211 12:16:30 -- common/autotest_common.sh@10 -- # set +x 00:20:29.211 12:16:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:29.211 12:16:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:29.211 12:16:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:29.211 12:16:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:29.211 12:16:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:29.211 12:16:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:29.211 12:16:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:29.211 12:16:30 -- nvmf/common.sh@295 -- # net_devs=() 00:20:29.211 12:16:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:29.211 12:16:30 -- nvmf/common.sh@296 -- # e810=() 00:20:29.211 12:16:30 -- nvmf/common.sh@296 -- # local -ga e810 00:20:29.211 12:16:30 -- nvmf/common.sh@297 -- # x722=() 00:20:29.211 12:16:30 -- nvmf/common.sh@297 -- # local -ga x722 00:20:29.211 12:16:30 -- nvmf/common.sh@298 -- # mlx=() 00:20:29.211 12:16:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:29.211 12:16:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.211 12:16:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.211 12:16:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.211 12:16:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.211 12:16:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.211 12:16:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.211 12:16:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.212 12:16:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.212 12:16:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.212 12:16:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.212 12:16:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.212 12:16:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:29.212 12:16:30 -- nvmf/common.sh@321 -- # [[ tcp == 
rdma ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:29.212 12:16:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:29.212 12:16:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:29.212 12:16:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:29.212 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:29.212 12:16:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:29.212 12:16:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:29.212 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:29.212 12:16:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:29.212 12:16:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:29.212 12:16:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.212 12:16:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:29.212 12:16:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.212 12:16:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:29.212 Found net devices under 0000:31:00.0: cvl_0_0 00:20:29.212 12:16:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.212 12:16:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:29.212 12:16:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.212 12:16:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:29.212 12:16:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.212 12:16:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:29.212 Found net devices under 0000:31:00.1: cvl_0_1 00:20:29.212 12:16:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.212 12:16:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:29.212 12:16:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:29.212 12:16:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:29.212 12:16:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:29.212 12:16:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.212 12:16:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.212 12:16:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.212 12:16:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:29.212 12:16:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.212 12:16:30 -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.212 12:16:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:29.212 12:16:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.212 12:16:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.212 12:16:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:29.212 12:16:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:29.212 12:16:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:29.212 12:16:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.212 12:16:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.212 12:16:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.212 12:16:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:29.212 12:16:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.472 12:16:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.472 12:16:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.472 12:16:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:29.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:20:29.472 00:20:29.472 --- 10.0.0.2 ping statistics --- 00:20:29.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.472 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:20:29.472 12:16:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:20:29.472 00:20:29.472 --- 10.0.0.1 ping statistics --- 00:20:29.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.472 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:20:29.472 12:16:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.472 12:16:30 -- nvmf/common.sh@411 -- # return 0 00:20:29.472 12:16:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:29.472 12:16:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.472 12:16:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:29.472 12:16:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:29.472 12:16:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.472 12:16:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:29.472 12:16:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:29.472 12:16:30 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:29.472 12:16:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:29.472 12:16:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:29.472 12:16:30 -- common/autotest_common.sh@10 -- # set +x 00:20:29.472 12:16:30 -- nvmf/common.sh@470 -- # nvmfpid=3460823 00:20:29.472 12:16:30 -- nvmf/common.sh@471 -- # waitforlisten 3460823 00:20:29.472 12:16:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:29.472 12:16:30 -- common/autotest_common.sh@817 -- # '[' -z 3460823 ']' 00:20:29.472 12:16:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.472 12:16:30 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:20:29.472 12:16:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.472 12:16:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:29.472 12:16:30 -- common/autotest_common.sh@10 -- # set +x 00:20:29.472 [2024-04-26 12:16:30.579639] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:29.472 [2024-04-26 12:16:30.579690] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.472 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.472 [2024-04-26 12:16:30.658649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:29.732 [2024-04-26 12:16:30.713021] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.732 [2024-04-26 12:16:30.713055] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.732 [2024-04-26 12:16:30.713061] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.732 [2024-04-26 12:16:30.713067] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.732 [2024-04-26 12:16:30.713072] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.732 [2024-04-26 12:16:30.713177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.732 [2024-04-26 12:16:30.713336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.732 [2024-04-26 12:16:30.713462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.732 [2024-04-26 12:16:30.713465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:30.303 12:16:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:30.303 12:16:31 -- common/autotest_common.sh@850 -- # return 0 00:20:30.303 12:16:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:30.303 12:16:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:30.303 12:16:31 -- common/autotest_common.sh@10 -- # set +x 00:20:30.303 12:16:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.303 12:16:31 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:30.303 12:16:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.303 12:16:31 -- common/autotest_common.sh@10 -- # set +x 00:20:30.303 [2024-04-26 12:16:31.399214] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.303 12:16:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.303 12:16:31 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:30.303 12:16:31 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:30.303 12:16:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:30.303 12:16:31 -- common/autotest_common.sh@10 -- # set +x 00:20:30.303 12:16:31 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:30.303 12:16:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.303 12:16:31 -- target/shutdown.sh@28 -- # cat 00:20:30.303 
12:16:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.303 12:16:31 -- target/shutdown.sh@28 -- # cat 00:20:30.303 12:16:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.303 12:16:31 -- target/shutdown.sh@28 -- # cat 00:20:30.303 12:16:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.303 12:16:31 -- target/shutdown.sh@28 -- # cat 00:20:30.303 12:16:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.303 12:16:31 -- target/shutdown.sh@28 -- # cat 00:20:30.303 12:16:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.303 12:16:31 -- target/shutdown.sh@28 -- # cat 00:20:30.303 12:16:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.303 12:16:31 -- target/shutdown.sh@28 -- # cat 00:20:30.303 12:16:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.303 12:16:31 -- target/shutdown.sh@28 -- # cat 00:20:30.303 12:16:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.303 12:16:31 -- target/shutdown.sh@28 -- # cat 00:20:30.303 12:16:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.303 12:16:31 -- target/shutdown.sh@28 -- # cat 00:20:30.303 12:16:31 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:30.303 12:16:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.303 12:16:31 -- common/autotest_common.sh@10 -- # set +x 00:20:30.303 Malloc1 00:20:30.303 [2024-04-26 12:16:31.493846] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.303 Malloc2 00:20:30.562 Malloc3 00:20:30.562 Malloc4 00:20:30.562 Malloc5 00:20:30.562 Malloc6 00:20:30.562 Malloc7 00:20:30.562 Malloc8 00:20:30.822 Malloc9 00:20:30.822 Malloc10 00:20:30.822 12:16:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.822 12:16:31 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:30.822 12:16:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:30.822 12:16:31 -- common/autotest_common.sh@10 -- # set +x 00:20:30.822 12:16:31 -- target/shutdown.sh@103 -- # perfpid=3461027 00:20:30.822 12:16:31 -- target/shutdown.sh@104 -- # waitforlisten 3461027 /var/tmp/bdevperf.sock 00:20:30.822 12:16:31 -- common/autotest_common.sh@817 -- # '[' -z 3461027 ']' 00:20:30.822 12:16:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.822 12:16:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:30.822 12:16:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
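Note: the bdevperf launch that follows feeds its JSON config on fd 63 via process substitution; the config itself is built by gen_nvmf_target_json in nvmf/common.sh, one bdev_nvme_attach_controller entry per subsystem, exactly as the config+=() heredocs in the trace show. A simplified sketch of that pattern (the outer "subsystems"/"bdev" wrapper is not echoed in the trace and is assumed here to be the standard config layout that bdevperf --json expects; traddr/trsvcid are the values used by this run):

gen_bdev_config() {    # simplified stand-in for gen_nvmf_target_json in nvmf/common.sh
    local subsystem entries=()
    for subsystem in "${@:-1}"; do
        entries+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the per-subsystem fragments with commas and wrap them in a bdev
    # subsystem config (wrapper assumed, not visible in the trace), pretty-printed by jq.
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${entries[*]} ] } ] }
EOF
}

# Usage matching the traced invocation (--json /dev/fd/63 is what <(...) shows up as):
# bdevperf -r /var/tmp/bdevperf.sock --json <(gen_bdev_config 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 10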
00:20:30.822 12:16:31 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:30.822 12:16:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:30.822 12:16:31 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:30.822 12:16:31 -- common/autotest_common.sh@10 -- # set +x 00:20:30.822 12:16:31 -- nvmf/common.sh@521 -- # config=() 00:20:30.822 12:16:31 -- nvmf/common.sh@521 -- # local subsystem config 00:20:30.822 12:16:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:30.822 { 00:20:30.822 "params": { 00:20:30.822 "name": "Nvme$subsystem", 00:20:30.822 "trtype": "$TEST_TRANSPORT", 00:20:30.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.822 "adrfam": "ipv4", 00:20:30.822 "trsvcid": "$NVMF_PORT", 00:20:30.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.822 "hdgst": ${hdgst:-false}, 00:20:30.822 "ddgst": ${ddgst:-false} 00:20:30.822 }, 00:20:30.822 "method": "bdev_nvme_attach_controller" 00:20:30.822 } 00:20:30.822 EOF 00:20:30.822 )") 00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # cat 00:20:30.822 12:16:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:30.822 { 00:20:30.822 "params": { 00:20:30.822 "name": "Nvme$subsystem", 00:20:30.822 "trtype": "$TEST_TRANSPORT", 00:20:30.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.822 "adrfam": "ipv4", 00:20:30.822 "trsvcid": "$NVMF_PORT", 00:20:30.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.822 "hdgst": ${hdgst:-false}, 00:20:30.822 "ddgst": ${ddgst:-false} 00:20:30.822 }, 00:20:30.822 "method": "bdev_nvme_attach_controller" 00:20:30.822 } 00:20:30.822 EOF 00:20:30.822 )") 00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # cat 00:20:30.822 12:16:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:30.822 { 00:20:30.822 "params": { 00:20:30.822 "name": "Nvme$subsystem", 00:20:30.822 "trtype": "$TEST_TRANSPORT", 00:20:30.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.822 "adrfam": "ipv4", 00:20:30.822 "trsvcid": "$NVMF_PORT", 00:20:30.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.822 "hdgst": ${hdgst:-false}, 00:20:30.822 "ddgst": ${ddgst:-false} 00:20:30.822 }, 00:20:30.822 "method": "bdev_nvme_attach_controller" 00:20:30.822 } 00:20:30.822 EOF 00:20:30.822 )") 00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # cat 00:20:30.822 12:16:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:30.822 { 00:20:30.822 "params": { 00:20:30.822 "name": "Nvme$subsystem", 00:20:30.822 "trtype": "$TEST_TRANSPORT", 00:20:30.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.822 "adrfam": "ipv4", 00:20:30.822 "trsvcid": "$NVMF_PORT", 00:20:30.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.822 "hdgst": ${hdgst:-false}, 00:20:30.822 "ddgst": ${ddgst:-false} 00:20:30.822 }, 00:20:30.822 "method": "bdev_nvme_attach_controller" 00:20:30.822 } 00:20:30.822 EOF 00:20:30.822 )") 
00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # cat 00:20:30.822 12:16:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:30.822 { 00:20:30.822 "params": { 00:20:30.822 "name": "Nvme$subsystem", 00:20:30.822 "trtype": "$TEST_TRANSPORT", 00:20:30.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.822 "adrfam": "ipv4", 00:20:30.822 "trsvcid": "$NVMF_PORT", 00:20:30.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.822 "hdgst": ${hdgst:-false}, 00:20:30.822 "ddgst": ${ddgst:-false} 00:20:30.822 }, 00:20:30.822 "method": "bdev_nvme_attach_controller" 00:20:30.822 } 00:20:30.822 EOF 00:20:30.822 )") 00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # cat 00:20:30.822 12:16:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:30.822 { 00:20:30.822 "params": { 00:20:30.822 "name": "Nvme$subsystem", 00:20:30.822 "trtype": "$TEST_TRANSPORT", 00:20:30.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.822 "adrfam": "ipv4", 00:20:30.822 "trsvcid": "$NVMF_PORT", 00:20:30.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.822 "hdgst": ${hdgst:-false}, 00:20:30.822 "ddgst": ${ddgst:-false} 00:20:30.822 }, 00:20:30.822 "method": "bdev_nvme_attach_controller" 00:20:30.822 } 00:20:30.822 EOF 00:20:30.822 )") 00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # cat 00:20:30.822 [2024-04-26 12:16:31.933414] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:30.822 [2024-04-26 12:16:31.933465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461027 ] 00:20:30.822 12:16:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:30.822 { 00:20:30.822 "params": { 00:20:30.822 "name": "Nvme$subsystem", 00:20:30.822 "trtype": "$TEST_TRANSPORT", 00:20:30.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.822 "adrfam": "ipv4", 00:20:30.822 "trsvcid": "$NVMF_PORT", 00:20:30.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.822 "hdgst": ${hdgst:-false}, 00:20:30.822 "ddgst": ${ddgst:-false} 00:20:30.822 }, 00:20:30.822 "method": "bdev_nvme_attach_controller" 00:20:30.822 } 00:20:30.822 EOF 00:20:30.822 )") 00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # cat 00:20:30.822 12:16:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:30.822 12:16:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:30.822 { 00:20:30.822 "params": { 00:20:30.822 "name": "Nvme$subsystem", 00:20:30.822 "trtype": "$TEST_TRANSPORT", 00:20:30.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.822 "adrfam": "ipv4", 00:20:30.822 "trsvcid": "$NVMF_PORT", 00:20:30.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.823 "hdgst": ${hdgst:-false}, 00:20:30.823 "ddgst": ${ddgst:-false} 00:20:30.823 }, 00:20:30.823 "method": "bdev_nvme_attach_controller" 00:20:30.823 } 00:20:30.823 EOF 00:20:30.823 )") 00:20:30.823 12:16:31 -- nvmf/common.sh@543 -- # cat 00:20:30.823 12:16:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:20:30.823 12:16:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:30.823 { 00:20:30.823 "params": { 00:20:30.823 "name": "Nvme$subsystem", 00:20:30.823 "trtype": "$TEST_TRANSPORT", 00:20:30.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.823 "adrfam": "ipv4", 00:20:30.823 "trsvcid": "$NVMF_PORT", 00:20:30.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.823 "hdgst": ${hdgst:-false}, 00:20:30.823 "ddgst": ${ddgst:-false} 00:20:30.823 }, 00:20:30.823 "method": "bdev_nvme_attach_controller" 00:20:30.823 } 00:20:30.823 EOF 00:20:30.823 )") 00:20:30.823 12:16:31 -- nvmf/common.sh@543 -- # cat 00:20:30.823 12:16:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:30.823 12:16:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:30.823 { 00:20:30.823 "params": { 00:20:30.823 "name": "Nvme$subsystem", 00:20:30.823 "trtype": "$TEST_TRANSPORT", 00:20:30.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.823 "adrfam": "ipv4", 00:20:30.823 "trsvcid": "$NVMF_PORT", 00:20:30.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.823 "hdgst": ${hdgst:-false}, 00:20:30.823 "ddgst": ${ddgst:-false} 00:20:30.823 }, 00:20:30.823 "method": "bdev_nvme_attach_controller" 00:20:30.823 } 00:20:30.823 EOF 00:20:30.823 )") 00:20:30.823 12:16:31 -- nvmf/common.sh@543 -- # cat 00:20:30.823 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.823 12:16:31 -- nvmf/common.sh@545 -- # jq . 00:20:30.823 12:16:31 -- nvmf/common.sh@546 -- # IFS=, 00:20:30.823 12:16:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:30.823 "params": { 00:20:30.823 "name": "Nvme1", 00:20:30.823 "trtype": "tcp", 00:20:30.823 "traddr": "10.0.0.2", 00:20:30.823 "adrfam": "ipv4", 00:20:30.823 "trsvcid": "4420", 00:20:30.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:30.823 "hdgst": false, 00:20:30.823 "ddgst": false 00:20:30.823 }, 00:20:30.823 "method": "bdev_nvme_attach_controller" 00:20:30.823 },{ 00:20:30.823 "params": { 00:20:30.823 "name": "Nvme2", 00:20:30.823 "trtype": "tcp", 00:20:30.823 "traddr": "10.0.0.2", 00:20:30.823 "adrfam": "ipv4", 00:20:30.823 "trsvcid": "4420", 00:20:30.823 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:30.823 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:30.823 "hdgst": false, 00:20:30.823 "ddgst": false 00:20:30.823 }, 00:20:30.823 "method": "bdev_nvme_attach_controller" 00:20:30.823 },{ 00:20:30.823 "params": { 00:20:30.823 "name": "Nvme3", 00:20:30.823 "trtype": "tcp", 00:20:30.823 "traddr": "10.0.0.2", 00:20:30.823 "adrfam": "ipv4", 00:20:30.823 "trsvcid": "4420", 00:20:30.823 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:30.823 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:30.823 "hdgst": false, 00:20:30.823 "ddgst": false 00:20:30.823 }, 00:20:30.823 "method": "bdev_nvme_attach_controller" 00:20:30.823 },{ 00:20:30.823 "params": { 00:20:30.823 "name": "Nvme4", 00:20:30.823 "trtype": "tcp", 00:20:30.823 "traddr": "10.0.0.2", 00:20:30.823 "adrfam": "ipv4", 00:20:30.823 "trsvcid": "4420", 00:20:30.823 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:30.823 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:30.823 "hdgst": false, 00:20:30.823 "ddgst": false 00:20:30.823 }, 00:20:30.823 "method": "bdev_nvme_attach_controller" 00:20:30.823 },{ 00:20:30.823 "params": { 00:20:30.823 "name": "Nvme5", 00:20:30.823 "trtype": "tcp", 00:20:30.823 "traddr": "10.0.0.2", 00:20:30.823 "adrfam": 
"ipv4", 00:20:30.823 "trsvcid": "4420", 00:20:30.823 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:30.823 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:30.823 "hdgst": false, 00:20:30.823 "ddgst": false 00:20:30.823 }, 00:20:30.823 "method": "bdev_nvme_attach_controller" 00:20:30.823 },{ 00:20:30.823 "params": { 00:20:30.823 "name": "Nvme6", 00:20:30.823 "trtype": "tcp", 00:20:30.823 "traddr": "10.0.0.2", 00:20:30.823 "adrfam": "ipv4", 00:20:30.823 "trsvcid": "4420", 00:20:30.823 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:30.823 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:30.823 "hdgst": false, 00:20:30.823 "ddgst": false 00:20:30.823 }, 00:20:30.823 "method": "bdev_nvme_attach_controller" 00:20:30.823 },{ 00:20:30.823 "params": { 00:20:30.823 "name": "Nvme7", 00:20:30.823 "trtype": "tcp", 00:20:30.823 "traddr": "10.0.0.2", 00:20:30.823 "adrfam": "ipv4", 00:20:30.823 "trsvcid": "4420", 00:20:30.823 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:30.823 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:30.823 "hdgst": false, 00:20:30.823 "ddgst": false 00:20:30.823 }, 00:20:30.823 "method": "bdev_nvme_attach_controller" 00:20:30.823 },{ 00:20:30.823 "params": { 00:20:30.823 "name": "Nvme8", 00:20:30.823 "trtype": "tcp", 00:20:30.823 "traddr": "10.0.0.2", 00:20:30.823 "adrfam": "ipv4", 00:20:30.823 "trsvcid": "4420", 00:20:30.823 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:30.823 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:30.823 "hdgst": false, 00:20:30.823 "ddgst": false 00:20:30.823 }, 00:20:30.823 "method": "bdev_nvme_attach_controller" 00:20:30.823 },{ 00:20:30.823 "params": { 00:20:30.823 "name": "Nvme9", 00:20:30.823 "trtype": "tcp", 00:20:30.823 "traddr": "10.0.0.2", 00:20:30.823 "adrfam": "ipv4", 00:20:30.823 "trsvcid": "4420", 00:20:30.823 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:30.823 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:30.823 "hdgst": false, 00:20:30.823 "ddgst": false 00:20:30.823 }, 00:20:30.823 "method": "bdev_nvme_attach_controller" 00:20:30.823 },{ 00:20:30.823 "params": { 00:20:30.823 "name": "Nvme10", 00:20:30.823 "trtype": "tcp", 00:20:30.823 "traddr": "10.0.0.2", 00:20:30.823 "adrfam": "ipv4", 00:20:30.823 "trsvcid": "4420", 00:20:30.823 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:30.823 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:30.823 "hdgst": false, 00:20:30.823 "ddgst": false 00:20:30.823 }, 00:20:30.823 "method": "bdev_nvme_attach_controller" 00:20:30.823 }' 00:20:30.823 [2024-04-26 12:16:31.994068] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.083 [2024-04-26 12:16:32.057585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.467 Running I/O for 10 seconds... 
00:20:32.467 12:16:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:32.467 12:16:33 -- common/autotest_common.sh@850 -- # return 0 00:20:32.467 12:16:33 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:32.467 12:16:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.467 12:16:33 -- common/autotest_common.sh@10 -- # set +x 00:20:32.467 12:16:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.467 12:16:33 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:32.467 12:16:33 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:32.467 12:16:33 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:32.467 12:16:33 -- target/shutdown.sh@57 -- # local ret=1 00:20:32.467 12:16:33 -- target/shutdown.sh@58 -- # local i 00:20:32.467 12:16:33 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:32.467 12:16:33 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:32.467 12:16:33 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:32.467 12:16:33 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:32.467 12:16:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.467 12:16:33 -- common/autotest_common.sh@10 -- # set +x 00:20:32.467 12:16:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.467 12:16:33 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:32.467 12:16:33 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:32.467 12:16:33 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:32.727 12:16:33 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:32.727 12:16:33 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:32.727 12:16:33 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:32.727 12:16:33 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:32.727 12:16:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.727 12:16:33 -- common/autotest_common.sh@10 -- # set +x 00:20:32.727 12:16:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.988 12:16:33 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:32.988 12:16:33 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:32.988 12:16:33 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:33.249 12:16:34 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:33.249 12:16:34 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:33.249 12:16:34 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:33.249 12:16:34 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:33.249 12:16:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.249 12:16:34 -- common/autotest_common.sh@10 -- # set +x 00:20:33.249 12:16:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.249 12:16:34 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:33.249 12:16:34 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:33.249 12:16:34 -- target/shutdown.sh@64 -- # ret=0 00:20:33.249 12:16:34 -- target/shutdown.sh@65 -- # break 00:20:33.249 12:16:34 -- target/shutdown.sh@69 -- # return 0 00:20:33.249 12:16:34 -- target/shutdown.sh@110 -- # killprocess 3461027 00:20:33.249 12:16:34 -- common/autotest_common.sh@936 -- # '[' -z 3461027 ']' 00:20:33.249 12:16:34 -- common/autotest_common.sh@940 -- # kill -0 3461027 00:20:33.249 12:16:34 -- common/autotest_common.sh@941 -- # uname 00:20:33.249 12:16:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:20:33.249 12:16:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3461027 00:20:33.249 12:16:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:33.249 12:16:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:33.249 12:16:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3461027' 00:20:33.249 killing process with pid 3461027 00:20:33.249 12:16:34 -- common/autotest_common.sh@955 -- # kill 3461027 00:20:33.249 12:16:34 -- common/autotest_common.sh@960 -- # wait 3461027 00:20:33.249 Received shutdown signal, test time was about 0.954043 seconds 00:20:33.249 00:20:33.249 Latency(us) 00:20:33.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.249 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.249 Verification LBA range: start 0x0 length 0x400 00:20:33.249 Nvme1n1 : 0.95 269.95 16.87 0.00 0.00 234189.87 21845.33 272629.76 00:20:33.249 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.249 Verification LBA range: start 0x0 length 0x400 00:20:33.249 Nvme2n1 : 0.94 276.75 17.30 0.00 0.00 223391.00 3413.33 251658.24 00:20:33.249 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.249 Verification LBA range: start 0x0 length 0x400 00:20:33.249 Nvme3n1 : 0.94 276.44 17.28 0.00 0.00 219100.65 1392.64 248162.99 00:20:33.249 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.249 Verification LBA range: start 0x0 length 0x400 00:20:33.249 Nvme4n1 : 0.95 268.58 16.79 0.00 0.00 221248.00 17148.59 242920.11 00:20:33.249 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.249 Verification LBA range: start 0x0 length 0x400 00:20:33.249 Nvme5n1 : 0.93 205.84 12.86 0.00 0.00 281984.85 19442.35 255153.49 00:20:33.250 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.250 Verification LBA range: start 0x0 length 0x400 00:20:33.250 Nvme6n1 : 0.95 269.38 16.84 0.00 0.00 211037.23 20643.84 228939.09 00:20:33.250 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.250 Verification LBA range: start 0x0 length 0x400 00:20:33.250 Nvme7n1 : 0.92 214.79 13.42 0.00 0.00 254936.36 4396.37 241172.48 00:20:33.250 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.250 Verification LBA range: start 0x0 length 0x400 00:20:33.250 Nvme8n1 : 0.93 207.29 12.96 0.00 0.00 260895.29 20862.29 228939.09 00:20:33.250 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.250 Verification LBA range: start 0x0 length 0x400 00:20:33.250 Nvme9n1 : 0.92 207.89 12.99 0.00 0.00 253558.33 19660.80 251658.24 00:20:33.250 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.250 Verification LBA range: start 0x0 length 0x400 00:20:33.250 Nvme10n1 : 0.94 205.05 12.82 0.00 0.00 251598.36 13817.17 269134.51 00:20:33.250 =================================================================================================================== 00:20:33.250 Total : 2401.95 150.12 0.00 0.00 238382.92 1392.64 272629.76 00:20:33.510 12:16:34 -- target/shutdown.sh@113 -- # sleep 1 00:20:34.453 12:16:35 -- target/shutdown.sh@114 -- # kill -0 3460823 00:20:34.453 12:16:35 -- target/shutdown.sh@116 -- # stoptarget 00:20:34.453 12:16:35 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:34.453 12:16:35 -- target/shutdown.sh@42 -- # 
rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:34.453 12:16:35 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:34.453 12:16:35 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:34.453 12:16:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:34.453 12:16:35 -- nvmf/common.sh@117 -- # sync 00:20:34.453 12:16:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:34.453 12:16:35 -- nvmf/common.sh@120 -- # set +e 00:20:34.453 12:16:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:34.453 12:16:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:34.453 rmmod nvme_tcp 00:20:34.453 rmmod nvme_fabrics 00:20:34.453 rmmod nvme_keyring 00:20:34.453 12:16:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:34.453 12:16:35 -- nvmf/common.sh@124 -- # set -e 00:20:34.453 12:16:35 -- nvmf/common.sh@125 -- # return 0 00:20:34.453 12:16:35 -- nvmf/common.sh@478 -- # '[' -n 3460823 ']' 00:20:34.453 12:16:35 -- nvmf/common.sh@479 -- # killprocess 3460823 00:20:34.453 12:16:35 -- common/autotest_common.sh@936 -- # '[' -z 3460823 ']' 00:20:34.453 12:16:35 -- common/autotest_common.sh@940 -- # kill -0 3460823 00:20:34.453 12:16:35 -- common/autotest_common.sh@941 -- # uname 00:20:34.453 12:16:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:34.453 12:16:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3460823 00:20:34.453 12:16:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:34.453 12:16:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:34.453 12:16:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3460823' 00:20:34.453 killing process with pid 3460823 00:20:34.453 12:16:35 -- common/autotest_common.sh@955 -- # kill 3460823 00:20:34.453 12:16:35 -- common/autotest_common.sh@960 -- # wait 3460823 00:20:34.714 12:16:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:34.714 12:16:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:34.714 12:16:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:34.714 12:16:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:34.714 12:16:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:34.714 12:16:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.714 12:16:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.714 12:16:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.261 12:16:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:37.261 00:20:37.261 real 0m7.856s 00:20:37.261 user 0m23.485s 00:20:37.261 sys 0m1.225s 00:20:37.261 12:16:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:37.261 12:16:37 -- common/autotest_common.sh@10 -- # set +x 00:20:37.261 ************************************ 00:20:37.261 END TEST nvmf_shutdown_tc2 00:20:37.261 ************************************ 00:20:37.261 12:16:38 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:37.261 12:16:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:37.261 12:16:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:37.261 12:16:38 -- common/autotest_common.sh@10 -- # set +x 00:20:37.261 ************************************ 00:20:37.261 START TEST nvmf_shutdown_tc3 00:20:37.261 ************************************ 00:20:37.261 12:16:38 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 
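The tc2 teardown traced above (stoptarget followed by nvmftestfini) removes the bdevperf state file and generated configs, unloads the nvme-tcp/nvme-fabrics modules, kills the nvmf_tgt process, and flushes the namespaced interfaces. A condensed sketch of that sequence; SPDK_DIR stands in for the workspace path, and the namespace deletion is an assumption since the body of remove_spdk_ns is not echoed in the trace:

stoptarget_and_fini() {
    rm -f ./local-job0-0-verify.state
    rm -rf "$SPDK_DIR/test/nvmf/target/bdevperf.conf" "$SPDK_DIR/test/nvmf/target/rpcs.txt"
    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics/nvme_keyring, as the rmmod lines above show
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess: stop the nvmf_tgt started for this test
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumed equivalent of remove_spdk_ns
    ip -4 addr flush cvl_0_1
}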
00:20:37.261 12:16:38 -- target/shutdown.sh@121 -- # starttarget 00:20:37.261 12:16:38 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:37.261 12:16:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:37.261 12:16:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.261 12:16:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:37.261 12:16:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:37.261 12:16:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:37.261 12:16:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.261 12:16:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.261 12:16:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.261 12:16:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:37.261 12:16:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:37.261 12:16:38 -- common/autotest_common.sh@10 -- # set +x 00:20:37.261 12:16:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:37.261 12:16:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:37.261 12:16:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:37.261 12:16:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:37.261 12:16:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:37.261 12:16:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:37.261 12:16:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:37.261 12:16:38 -- nvmf/common.sh@295 -- # net_devs=() 00:20:37.261 12:16:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:37.261 12:16:38 -- nvmf/common.sh@296 -- # e810=() 00:20:37.261 12:16:38 -- nvmf/common.sh@296 -- # local -ga e810 00:20:37.261 12:16:38 -- nvmf/common.sh@297 -- # x722=() 00:20:37.261 12:16:38 -- nvmf/common.sh@297 -- # local -ga x722 00:20:37.261 12:16:38 -- nvmf/common.sh@298 -- # mlx=() 00:20:37.261 12:16:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:37.261 12:16:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.261 12:16:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.261 12:16:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.261 12:16:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.261 12:16:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.261 12:16:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.261 12:16:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.261 12:16:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.261 12:16:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.261 12:16:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.261 12:16:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.261 12:16:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:37.261 12:16:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:37.261 12:16:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:37.261 12:16:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.261 12:16:38 -- nvmf/common.sh@341 -- # echo 
'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:37.261 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:37.261 12:16:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.261 12:16:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:37.261 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:37.261 12:16:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.261 12:16:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.261 12:16:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.261 12:16:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:37.261 12:16:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.261 12:16:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:37.261 Found net devices under 0000:31:00.0: cvl_0_0 00:20:37.261 12:16:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.261 12:16:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.261 12:16:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.261 12:16:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:37.261 12:16:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.261 12:16:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:37.261 Found net devices under 0000:31:00.1: cvl_0_1 00:20:37.261 12:16:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.261 12:16:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:37.261 12:16:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:37.261 12:16:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:37.261 12:16:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:37.261 12:16:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.261 12:16:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.261 12:16:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.261 12:16:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:37.261 12:16:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.261 12:16:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.261 12:16:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:37.261 12:16:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.261 12:16:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.261 12:16:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:37.261 12:16:38 -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:20:37.261 12:16:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.262 12:16:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.262 12:16:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.262 12:16:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.262 12:16:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:37.262 12:16:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.262 12:16:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.262 12:16:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.262 12:16:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:37.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:20:37.262 00:20:37.262 --- 10.0.0.2 ping statistics --- 00:20:37.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.262 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:20:37.262 12:16:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:20:37.524 00:20:37.524 --- 10.0.0.1 ping statistics --- 00:20:37.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.524 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:20:37.524 12:16:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.524 12:16:38 -- nvmf/common.sh@411 -- # return 0 00:20:37.524 12:16:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:37.524 12:16:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.524 12:16:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:37.524 12:16:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:37.524 12:16:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.524 12:16:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:37.524 12:16:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:37.524 12:16:38 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:37.524 12:16:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:37.524 12:16:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:37.524 12:16:38 -- common/autotest_common.sh@10 -- # set +x 00:20:37.524 12:16:38 -- nvmf/common.sh@470 -- # nvmfpid=3462395 00:20:37.524 12:16:38 -- nvmf/common.sh@471 -- # waitforlisten 3462395 00:20:37.524 12:16:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:37.524 12:16:38 -- common/autotest_common.sh@817 -- # '[' -z 3462395 ']' 00:20:37.524 12:16:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.524 12:16:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:37.524 12:16:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
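The nvmf_tcp_init steps traced above move the target port (cvl_0_0) into its own network namespace and leave the initiator port (cvl_0_1) in the root namespace, so target and initiator can exchange real TCP traffic over the E810 links on one host. Collected into a single runnable sequence, using the interface names and addresses from this job:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator sanity check

nvmf_tgt is then started inside that namespace via the ip netns exec wrapper shown in the trace, which is why the RPC socket it listens on (/var/tmp/spdk.sock) is reached through the namespaced invocation.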
00:20:37.524 12:16:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:37.524 12:16:38 -- common/autotest_common.sh@10 -- # set +x 00:20:37.524 [2024-04-26 12:16:38.606966] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:37.524 [2024-04-26 12:16:38.607021] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.524 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.524 [2024-04-26 12:16:38.697305] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.785 [2024-04-26 12:16:38.765021] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.785 [2024-04-26 12:16:38.765062] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.785 [2024-04-26 12:16:38.765069] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.785 [2024-04-26 12:16:38.765074] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.785 [2024-04-26 12:16:38.765080] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:37.785 [2024-04-26 12:16:38.765194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.785 [2024-04-26 12:16:38.765345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.785 [2024-04-26 12:16:38.765504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.785 [2024-04-26 12:16:38.765507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:38.357 12:16:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:38.357 12:16:39 -- common/autotest_common.sh@850 -- # return 0 00:20:38.357 12:16:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:38.357 12:16:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:38.357 12:16:39 -- common/autotest_common.sh@10 -- # set +x 00:20:38.357 12:16:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.357 12:16:39 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:38.357 12:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.357 12:16:39 -- common/autotest_common.sh@10 -- # set +x 00:20:38.357 [2024-04-26 12:16:39.426955] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.357 12:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.357 12:16:39 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:38.357 12:16:39 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:38.357 12:16:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:38.357 12:16:39 -- common/autotest_common.sh@10 -- # set +x 00:20:38.357 12:16:39 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:38.357 12:16:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.358 12:16:39 -- target/shutdown.sh@28 -- # cat 00:20:38.358 12:16:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.358 12:16:39 -- target/shutdown.sh@28 -- # cat 00:20:38.358 12:16:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.358 12:16:39 -- target/shutdown.sh@28 -- # cat 
00:20:38.358 12:16:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.358 12:16:39 -- target/shutdown.sh@28 -- # cat 00:20:38.358 12:16:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.358 12:16:39 -- target/shutdown.sh@28 -- # cat 00:20:38.358 12:16:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.358 12:16:39 -- target/shutdown.sh@28 -- # cat 00:20:38.358 12:16:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.358 12:16:39 -- target/shutdown.sh@28 -- # cat 00:20:38.358 12:16:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.358 12:16:39 -- target/shutdown.sh@28 -- # cat 00:20:38.358 12:16:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.358 12:16:39 -- target/shutdown.sh@28 -- # cat 00:20:38.358 12:16:39 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.358 12:16:39 -- target/shutdown.sh@28 -- # cat 00:20:38.358 12:16:39 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:38.358 12:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.358 12:16:39 -- common/autotest_common.sh@10 -- # set +x 00:20:38.358 Malloc1 00:20:38.358 [2024-04-26 12:16:39.525829] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.358 Malloc2 00:20:38.619 Malloc3 00:20:38.619 Malloc4 00:20:38.619 Malloc5 00:20:38.619 Malloc6 00:20:38.619 Malloc7 00:20:38.619 Malloc8 00:20:38.619 Malloc9 00:20:38.881 Malloc10 00:20:38.881 12:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.881 12:16:39 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:38.881 12:16:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:38.881 12:16:39 -- common/autotest_common.sh@10 -- # set +x 00:20:38.881 12:16:39 -- target/shutdown.sh@125 -- # perfpid=3462764 00:20:38.881 12:16:39 -- target/shutdown.sh@126 -- # waitforlisten 3462764 /var/tmp/bdevperf.sock 00:20:38.881 12:16:39 -- common/autotest_common.sh@817 -- # '[' -z 3462764 ']' 00:20:38.881 12:16:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.881 12:16:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:38.881 12:16:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
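The per-subsystem configuration itself is not echoed in this excerpt: each shutdown.sh@28 cat above appends one subsystem's RPC batch to rpcs.txt, and only the resulting Malloc1..Malloc10 bdevs plus the single "Listening on 10.0.0.2 port 4420" notice show up in the log. Assuming SPDK's standard rpc.py command names (an assumption; the batch contents are not visible in this trace), each of the ten iterations boils down to roughly:

  # hypothetical sketch of one iteration of the rpcs.txt batch, i = 1..10
  bdev_malloc_create -b Malloc$i 64 512                               # size/block values are placeholders, not shown in the log
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The whole batch is then handed to a single rpc_cmd call (shutdown.sh@35), which is why all ten Malloc bdevs and the TCP listener notice appear together in the output above.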
00:20:38.881 12:16:39 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:38.881 12:16:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:38.881 12:16:39 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:38.881 12:16:39 -- common/autotest_common.sh@10 -- # set +x 00:20:38.881 12:16:39 -- nvmf/common.sh@521 -- # config=() 00:20:38.881 12:16:39 -- nvmf/common.sh@521 -- # local subsystem config 00:20:38.881 12:16:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.881 { 00:20:38.881 "params": { 00:20:38.881 "name": "Nvme$subsystem", 00:20:38.881 "trtype": "$TEST_TRANSPORT", 00:20:38.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.881 "adrfam": "ipv4", 00:20:38.881 "trsvcid": "$NVMF_PORT", 00:20:38.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.881 "hdgst": ${hdgst:-false}, 00:20:38.881 "ddgst": ${ddgst:-false} 00:20:38.881 }, 00:20:38.881 "method": "bdev_nvme_attach_controller" 00:20:38.881 } 00:20:38.881 EOF 00:20:38.881 )") 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # cat 00:20:38.881 12:16:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.881 { 00:20:38.881 "params": { 00:20:38.881 "name": "Nvme$subsystem", 00:20:38.881 "trtype": "$TEST_TRANSPORT", 00:20:38.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.881 "adrfam": "ipv4", 00:20:38.881 "trsvcid": "$NVMF_PORT", 00:20:38.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.881 "hdgst": ${hdgst:-false}, 00:20:38.881 "ddgst": ${ddgst:-false} 00:20:38.881 }, 00:20:38.881 "method": "bdev_nvme_attach_controller" 00:20:38.881 } 00:20:38.881 EOF 00:20:38.881 )") 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # cat 00:20:38.881 12:16:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.881 { 00:20:38.881 "params": { 00:20:38.881 "name": "Nvme$subsystem", 00:20:38.881 "trtype": "$TEST_TRANSPORT", 00:20:38.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.881 "adrfam": "ipv4", 00:20:38.881 "trsvcid": "$NVMF_PORT", 00:20:38.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.881 "hdgst": ${hdgst:-false}, 00:20:38.881 "ddgst": ${ddgst:-false} 00:20:38.881 }, 00:20:38.881 "method": "bdev_nvme_attach_controller" 00:20:38.881 } 00:20:38.881 EOF 00:20:38.881 )") 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # cat 00:20:38.881 12:16:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.881 { 00:20:38.881 "params": { 00:20:38.881 "name": "Nvme$subsystem", 00:20:38.881 "trtype": "$TEST_TRANSPORT", 00:20:38.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.881 "adrfam": "ipv4", 00:20:38.881 "trsvcid": "$NVMF_PORT", 00:20:38.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.881 "hdgst": ${hdgst:-false}, 00:20:38.881 "ddgst": ${ddgst:-false} 00:20:38.881 }, 00:20:38.881 "method": "bdev_nvme_attach_controller" 00:20:38.881 } 00:20:38.881 EOF 00:20:38.881 )") 
00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # cat 00:20:38.881 12:16:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.881 { 00:20:38.881 "params": { 00:20:38.881 "name": "Nvme$subsystem", 00:20:38.881 "trtype": "$TEST_TRANSPORT", 00:20:38.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.881 "adrfam": "ipv4", 00:20:38.881 "trsvcid": "$NVMF_PORT", 00:20:38.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.881 "hdgst": ${hdgst:-false}, 00:20:38.881 "ddgst": ${ddgst:-false} 00:20:38.881 }, 00:20:38.881 "method": "bdev_nvme_attach_controller" 00:20:38.881 } 00:20:38.881 EOF 00:20:38.881 )") 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # cat 00:20:38.881 12:16:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.881 { 00:20:38.881 "params": { 00:20:38.881 "name": "Nvme$subsystem", 00:20:38.881 "trtype": "$TEST_TRANSPORT", 00:20:38.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.881 "adrfam": "ipv4", 00:20:38.881 "trsvcid": "$NVMF_PORT", 00:20:38.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.881 "hdgst": ${hdgst:-false}, 00:20:38.881 "ddgst": ${ddgst:-false} 00:20:38.881 }, 00:20:38.881 "method": "bdev_nvme_attach_controller" 00:20:38.881 } 00:20:38.881 EOF 00:20:38.881 )") 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # cat 00:20:38.881 [2024-04-26 12:16:39.964203] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:38.881 [2024-04-26 12:16:39.964253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3462764 ] 00:20:38.881 12:16:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.881 { 00:20:38.881 "params": { 00:20:38.881 "name": "Nvme$subsystem", 00:20:38.881 "trtype": "$TEST_TRANSPORT", 00:20:38.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.881 "adrfam": "ipv4", 00:20:38.881 "trsvcid": "$NVMF_PORT", 00:20:38.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.881 "hdgst": ${hdgst:-false}, 00:20:38.881 "ddgst": ${ddgst:-false} 00:20:38.881 }, 00:20:38.881 "method": "bdev_nvme_attach_controller" 00:20:38.881 } 00:20:38.881 EOF 00:20:38.881 )") 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # cat 00:20:38.881 12:16:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.881 { 00:20:38.881 "params": { 00:20:38.881 "name": "Nvme$subsystem", 00:20:38.881 "trtype": "$TEST_TRANSPORT", 00:20:38.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.881 "adrfam": "ipv4", 00:20:38.881 "trsvcid": "$NVMF_PORT", 00:20:38.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.881 "hdgst": ${hdgst:-false}, 00:20:38.881 "ddgst": ${ddgst:-false} 00:20:38.881 }, 00:20:38.881 "method": "bdev_nvme_attach_controller" 00:20:38.881 } 00:20:38.881 EOF 00:20:38.881 )") 00:20:38.881 12:16:39 -- nvmf/common.sh@543 -- # cat 00:20:38.882 12:16:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:20:38.882 12:16:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.882 { 00:20:38.882 "params": { 00:20:38.882 "name": "Nvme$subsystem", 00:20:38.882 "trtype": "$TEST_TRANSPORT", 00:20:38.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.882 "adrfam": "ipv4", 00:20:38.882 "trsvcid": "$NVMF_PORT", 00:20:38.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.882 "hdgst": ${hdgst:-false}, 00:20:38.882 "ddgst": ${ddgst:-false} 00:20:38.882 }, 00:20:38.882 "method": "bdev_nvme_attach_controller" 00:20:38.882 } 00:20:38.882 EOF 00:20:38.882 )") 00:20:38.882 12:16:39 -- nvmf/common.sh@543 -- # cat 00:20:38.882 12:16:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.882 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.882 12:16:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.882 { 00:20:38.882 "params": { 00:20:38.882 "name": "Nvme$subsystem", 00:20:38.882 "trtype": "$TEST_TRANSPORT", 00:20:38.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.882 "adrfam": "ipv4", 00:20:38.882 "trsvcid": "$NVMF_PORT", 00:20:38.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.882 "hdgst": ${hdgst:-false}, 00:20:38.882 "ddgst": ${ddgst:-false} 00:20:38.882 }, 00:20:38.882 "method": "bdev_nvme_attach_controller" 00:20:38.882 } 00:20:38.882 EOF 00:20:38.882 )") 00:20:38.882 12:16:39 -- nvmf/common.sh@543 -- # cat 00:20:38.882 12:16:39 -- nvmf/common.sh@545 -- # jq . 00:20:38.882 12:16:39 -- nvmf/common.sh@546 -- # IFS=, 00:20:38.882 12:16:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:38.882 "params": { 00:20:38.882 "name": "Nvme1", 00:20:38.882 "trtype": "tcp", 00:20:38.882 "traddr": "10.0.0.2", 00:20:38.882 "adrfam": "ipv4", 00:20:38.882 "trsvcid": "4420", 00:20:38.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.882 "hdgst": false, 00:20:38.882 "ddgst": false 00:20:38.882 }, 00:20:38.882 "method": "bdev_nvme_attach_controller" 00:20:38.882 },{ 00:20:38.882 "params": { 00:20:38.882 "name": "Nvme2", 00:20:38.882 "trtype": "tcp", 00:20:38.882 "traddr": "10.0.0.2", 00:20:38.882 "adrfam": "ipv4", 00:20:38.882 "trsvcid": "4420", 00:20:38.882 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:38.882 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:38.882 "hdgst": false, 00:20:38.882 "ddgst": false 00:20:38.882 }, 00:20:38.882 "method": "bdev_nvme_attach_controller" 00:20:38.882 },{ 00:20:38.882 "params": { 00:20:38.882 "name": "Nvme3", 00:20:38.882 "trtype": "tcp", 00:20:38.882 "traddr": "10.0.0.2", 00:20:38.882 "adrfam": "ipv4", 00:20:38.882 "trsvcid": "4420", 00:20:38.882 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:38.882 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:38.882 "hdgst": false, 00:20:38.882 "ddgst": false 00:20:38.882 }, 00:20:38.882 "method": "bdev_nvme_attach_controller" 00:20:38.882 },{ 00:20:38.882 "params": { 00:20:38.882 "name": "Nvme4", 00:20:38.882 "trtype": "tcp", 00:20:38.882 "traddr": "10.0.0.2", 00:20:38.882 "adrfam": "ipv4", 00:20:38.882 "trsvcid": "4420", 00:20:38.882 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:38.882 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:38.882 "hdgst": false, 00:20:38.882 "ddgst": false 00:20:38.882 }, 00:20:38.882 "method": "bdev_nvme_attach_controller" 00:20:38.882 },{ 00:20:38.882 "params": { 00:20:38.882 "name": "Nvme5", 00:20:38.882 "trtype": "tcp", 00:20:38.882 "traddr": "10.0.0.2", 00:20:38.882 "adrfam": 
"ipv4", 00:20:38.882 "trsvcid": "4420", 00:20:38.882 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:38.882 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:38.882 "hdgst": false, 00:20:38.882 "ddgst": false 00:20:38.882 }, 00:20:38.882 "method": "bdev_nvme_attach_controller" 00:20:38.882 },{ 00:20:38.882 "params": { 00:20:38.882 "name": "Nvme6", 00:20:38.882 "trtype": "tcp", 00:20:38.882 "traddr": "10.0.0.2", 00:20:38.882 "adrfam": "ipv4", 00:20:38.882 "trsvcid": "4420", 00:20:38.882 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:38.882 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:38.882 "hdgst": false, 00:20:38.882 "ddgst": false 00:20:38.882 }, 00:20:38.882 "method": "bdev_nvme_attach_controller" 00:20:38.882 },{ 00:20:38.882 "params": { 00:20:38.882 "name": "Nvme7", 00:20:38.882 "trtype": "tcp", 00:20:38.882 "traddr": "10.0.0.2", 00:20:38.882 "adrfam": "ipv4", 00:20:38.882 "trsvcid": "4420", 00:20:38.882 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:38.882 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:38.882 "hdgst": false, 00:20:38.882 "ddgst": false 00:20:38.882 }, 00:20:38.882 "method": "bdev_nvme_attach_controller" 00:20:38.882 },{ 00:20:38.882 "params": { 00:20:38.882 "name": "Nvme8", 00:20:38.882 "trtype": "tcp", 00:20:38.882 "traddr": "10.0.0.2", 00:20:38.882 "adrfam": "ipv4", 00:20:38.882 "trsvcid": "4420", 00:20:38.882 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:38.882 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:38.882 "hdgst": false, 00:20:38.882 "ddgst": false 00:20:38.882 }, 00:20:38.882 "method": "bdev_nvme_attach_controller" 00:20:38.882 },{ 00:20:38.882 "params": { 00:20:38.882 "name": "Nvme9", 00:20:38.882 "trtype": "tcp", 00:20:38.882 "traddr": "10.0.0.2", 00:20:38.882 "adrfam": "ipv4", 00:20:38.882 "trsvcid": "4420", 00:20:38.882 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:38.882 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:38.882 "hdgst": false, 00:20:38.882 "ddgst": false 00:20:38.882 }, 00:20:38.882 "method": "bdev_nvme_attach_controller" 00:20:38.882 },{ 00:20:38.882 "params": { 00:20:38.882 "name": "Nvme10", 00:20:38.882 "trtype": "tcp", 00:20:38.882 "traddr": "10.0.0.2", 00:20:38.882 "adrfam": "ipv4", 00:20:38.882 "trsvcid": "4420", 00:20:38.882 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:38.882 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:38.882 "hdgst": false, 00:20:38.882 "ddgst": false 00:20:38.882 }, 00:20:38.882 "method": "bdev_nvme_attach_controller" 00:20:38.882 }' 00:20:38.882 [2024-04-26 12:16:40.025309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.882 [2024-04-26 12:16:40.088607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.794 Running I/O for 10 seconds... 
00:20:40.794 12:16:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:40.794 12:16:41 -- common/autotest_common.sh@850 -- # return 0 00:20:40.794 12:16:41 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:40.794 12:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.794 12:16:41 -- common/autotest_common.sh@10 -- # set +x 00:20:40.794 12:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.794 12:16:41 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:40.794 12:16:41 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:40.794 12:16:41 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:40.794 12:16:41 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:40.794 12:16:41 -- target/shutdown.sh@57 -- # local ret=1 00:20:40.794 12:16:41 -- target/shutdown.sh@58 -- # local i 00:20:40.794 12:16:41 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:40.795 12:16:41 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:40.795 12:16:41 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:40.795 12:16:41 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:40.795 12:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.795 12:16:41 -- common/autotest_common.sh@10 -- # set +x 00:20:40.795 12:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.795 12:16:41 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:40.795 12:16:41 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:40.795 12:16:41 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:41.055 12:16:42 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:41.055 12:16:42 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:41.055 12:16:42 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:41.055 12:16:42 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:41.055 12:16:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.055 12:16:42 -- common/autotest_common.sh@10 -- # set +x 00:20:41.055 12:16:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.055 12:16:42 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:41.055 12:16:42 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:41.055 12:16:42 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:41.322 12:16:42 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:41.322 12:16:42 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:41.323 12:16:42 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:41.323 12:16:42 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:41.323 12:16:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.323 12:16:42 -- common/autotest_common.sh@10 -- # set +x 00:20:41.323 12:16:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.323 12:16:42 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:41.323 12:16:42 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:41.323 12:16:42 -- target/shutdown.sh@64 -- # ret=0 00:20:41.323 12:16:42 -- target/shutdown.sh@65 -- # break 00:20:41.323 12:16:42 -- target/shutdown.sh@69 -- # return 0 00:20:41.323 12:16:42 -- target/shutdown.sh@135 -- # killprocess 3462395 00:20:41.323 12:16:42 -- common/autotest_common.sh@936 -- # '[' -z 3462395 ']' 00:20:41.323 12:16:42 -- common/autotest_common.sh@940 -- # kill 
-0 3462395 00:20:41.323 12:16:42 -- common/autotest_common.sh@941 -- # uname 00:20:41.323 12:16:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:41.323 12:16:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3462395 00:20:41.323 12:16:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:41.323 12:16:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:41.323 12:16:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3462395' 00:20:41.323 killing process with pid 3462395 00:20:41.323 12:16:42 -- common/autotest_common.sh@955 -- # kill 3462395 00:20:41.323 12:16:42 -- common/autotest_common.sh@960 -- # wait 3462395 00:20:41.323 [2024-04-26 12:16:42.501772] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501836] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501848] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501854] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501858] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501863] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501868] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501872] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501877] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501881] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501886] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501890] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501900] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501904] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501913] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501918] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501922] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501931] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501935] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501948] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501953] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501958] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501962] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501966] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501971] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501984] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501989] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501993] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.501998] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502002] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502006] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502012] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502017] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 
00:20:41.323 [2024-04-26 12:16:42.502023] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502027] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502036] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502040] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502045] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502050] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502054] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502058] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502067] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502071] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502075] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502080] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502084] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502092] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502097] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502101] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502106] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502110] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502115] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is 
same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502119] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.502123] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8a60 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.503254] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.503280] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.503286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.503294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.503299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.503303] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.503308] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.503312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.323 [2024-04-26 12:16:42.503317] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503322] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503326] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503331] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503340] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503345] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503349] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503363] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503367] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503376] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503381] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503385] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503390] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503394] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503398] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503407] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503412] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503416] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503422] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503431] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503435] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503445] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503449] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503454] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503458] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503467] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503471] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503476] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503480] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503485] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503489] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503497] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503502] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503506] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503511] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503515] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503528] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503532] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503536] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503546] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503551] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503556] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.503560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb390 is same with the state(5) to be set 
00:20:41.324 [2024-04-26 12:16:42.504459] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504475] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504484] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504488] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504497] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504502] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504506] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504511] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504515] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504520] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504528] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504533] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504537] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504547] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504551] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504556] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504561] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is 
same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504568] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504572] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504577] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504581] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504586] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504590] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504594] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.324 [2024-04-26 12:16:42.504599] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.325 [2024-04-26 12:16:42.504604] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.325 [2024-04-26 12:16:42.504608] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.325 [2024-04-26 12:16:42.504613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.325 [2024-04-26 12:16:42.504617] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.325 [2024-04-26 12:16:42.504621] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.325 [2024-04-26 12:16:42.504626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.325 [2024-04-26 12:16:42.504630] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.325 [2024-04-26 12:16:42.504634] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.325 [2024-04-26 12:16:42.504639] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.325 [2024-04-26 12:16:42.504643] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.325 [2024-04-26 12:16:42.504647] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.325 [2024-04-26 12:16:42.504652] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.325 [2024-04-26 12:16:42.504656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b8ef0 is same with the state(5) to be set 00:20:41.325 [2024-04-26 12:16:42.504661] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x6b8ef0 is same with the state(5) to be set (same message logged repeatedly for this tqpair until 2024-04-26 12:16:42.504744)
00:20:41.325 [2024-04-26 12:16:42.505886] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b9380 is same with the state(5) to be set (same message logged repeatedly for this tqpair until 12:16:42.506187)
00:20:41.326 [2024-04-26 12:16:42.506964] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b9810 is same with the state(5) to be set (same message logged repeatedly for this tqpair until 12:16:42.507270)
00:20:41.326 [2024-04-26 12:16:42.507700] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b9cc0 is same with the state(5) to be set (same message logged repeatedly for this tqpair until 12:16:42.507996)
00:20:41.327 [2024-04-26 12:16:42.508688] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ba150 is same with the state(5) to be set (same message logged repeatedly for this tqpair until 12:16:42.518918)
00:20:41.328 [2024-04-26 12:16:42.519671] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ba5e0 is same with the state(5) to be set (same message logged repeatedly for this tqpair until 12:16:42.520035, interleaved with the host-side output below)
00:20:41.328 [2024-04-26 12:16:42.519696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 (the same command print repeats for cid:0 through cid:3 on each affected admin qpair until 12:16:42.520451)
00:20:41.328 [2024-04-26 12:16:42.519733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (one such completion print follows each aborted ASYNC EVENT REQUEST until 12:16:42.520458)
00:20:41.328 [2024-04-26 12:16:42.519799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1174790 is same with the state(5) to be set
00:20:41.329 [2024-04-26 12:16:42.519910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11510c0 is same with the state(5) to be set
00:20:41.329 [2024-04-26 12:16:42.520003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb1fd0 is same with the state(5) to be set
00:20:41.329 [2024-04-26 12:16:42.520088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117c8c0 is same with the state(5) to be set
00:20:41.329 [2024-04-26 12:16:42.520171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1240400 is same with the state(5) to be set
00:20:41.329 [2024-04-26 12:16:42.520255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235a30 is same with the state(5) to be set
00:20:41.330 [2024-04-26 12:16:42.520366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1199c10 is same with the state(5) to be set
00:20:41.330 [2024-04-26 12:16:42.520465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x115f8f0 is same with the state(5) to be set
00:20:41.330 [2024-04-26 12:16:42.520973] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set (same message logged repeatedly for this tqpair through 12:16:42.521185)
00:20:41.330 [2024-04-26 12:16:42.521189] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521198] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521202] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521206] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521211] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521219] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521223] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521228] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521236] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521241] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521250] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521255] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521259] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521685] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521690] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521694] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set 00:20:41.330 [2024-04-26 12:16:42.521699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is 
same with the state(5) to be set
00:20:41.330 [2024-04-26 12:16:42.521704] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.330 [2024-04-26 12:16:42.521708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521713] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521717] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521722] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521726] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521731] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521735] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521739] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521744] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521748] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521753] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521758] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521762] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521767] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521771] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521776] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.331 [2024-04-26 12:16:42.521781] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521791] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521796] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.331 [2024-04-26 12:16:42.521801] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521811] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.331 [2024-04-26 12:16:42.521815] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521823] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.331 [2024-04-26 12:16:42.521827] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521832] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.331 [2024-04-26 12:16:42.521842] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.331 [2024-04-26 12:16:42.521854] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521859] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.331 [2024-04-26 12:16:42.521866] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.331 [2024-04-26 12:16:42.521871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521876] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.331 [2024-04-26 12:16:42.521881] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521887] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.331 [2024-04-26 12:16:42.521896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521901] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.331 [2024-04-26 12:16:42.521908] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.331 [2024-04-26 12:16:42.521914] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.331 [2024-04-26 12:16:42.521924] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.331 [2024-04-26 12:16:42.521929] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521937] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.331 [2024-04-26 12:16:42.521941] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521949] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.331 [2024-04-26 12:16:42.521954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521959] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.331 [2024-04-26 12:16:42.521963] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.331 [2024-04-26 12:16:42.521969] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.331 [2024-04-26 12:16:42.521980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.331 [2024-04-26 12:16:42.521987] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521994] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.521997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.331 [2024-04-26 12:16:42.521999] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.522006] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.331 [2024-04-26 12:16:42.522006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.331 [2024-04-26 12:16:42.522010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.332 [2024-04-26 12:16:42.522015] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baf00 is same with the state(5) to be set
00:20:41.332 [2024-04-26 12:16:42.522016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.332 [2024-04-26 12:16:42.522024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.332 [2024-04-26 12:16:42.522033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.332 [2024-04-26 12:16:42.522040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.332 [2024-04-26 12:16:42.522049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 
nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.332 [2024-04-26 12:16:42.522598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.332 [2024-04-26 12:16:42.522607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:41.333 [2024-04-26 12:16:42.522879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.333 [2024-04-26 12:16:42.522903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:41.333 [2024-04-26 12:16:42.522945] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x114c460 was disconnected and freed. reset controller. 00:20:41.608 [2024-04-26 12:16:42.562999] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:41.608 [2024-04-26 12:16:42.563078] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f30f0 (9): Bad file descriptor 00:20:41.608 [2024-04-26 12:16:42.563112] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1174790 (9): Bad file descriptor 00:20:41.608 [2024-04-26 12:16:42.563132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11510c0 (9): Bad file descriptor 00:20:41.608 [2024-04-26 12:16:42.563148] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb1fd0 (9): Bad file descriptor 00:20:41.608 [2024-04-26 12:16:42.563163] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117c8c0 (9): Bad file descriptor 00:20:41.608 [2024-04-26 12:16:42.563179] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1240400 (9): Bad file descriptor 00:20:41.608 [2024-04-26 12:16:42.563197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1235a30 (9): Bad file descriptor 00:20:41.608 [2024-04-26 12:16:42.563228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.608 [2024-04-26 12:16:42.563239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.608 [2024-04-26 12:16:42.563265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.608 [2024-04-26 12:16:42.563279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.608 [2024-04-26 12:16:42.563294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11737c0 is same with the state(5) to be set 00:20:41.608 [2024-04-26 12:16:42.563318] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1199c10 (9): Bad file descriptor 00:20:41.608 [2024-04-26 12:16:42.563336] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x115f8f0 (9): Bad file descriptor 00:20:41.608 [2024-04-26 12:16:42.563384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.608 [2024-04-26 12:16:42.563871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.608 [2024-04-26 12:16:42.563880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.563888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.563897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.563904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.563913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.563920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.563929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.563937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.563948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.563955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.563964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.563971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.563981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.563988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.563997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564507] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1210eb0 was disconnected and freed. reset controller. 00:20:41.609 [2024-04-26 12:16:42.564586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.609 [2024-04-26 12:16:42.564658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.609 [2024-04-26 12:16:42.564666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564675] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.564985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.564992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 
nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.610 [2024-04-26 12:16:42.565318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.610 [2024-04-26 12:16:42.565327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.565655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.565707] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12e9b90 was disconnected and freed. reset controller. 
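The long runs of paired nvme_io_qpair_print_command / spdk_nvme_print_completion notices above are the outstanding READ/WRITE commands on I/O queue 1 being failed back when their submission queue is deleted during the reset; the "(00/08)" printed with each completion is the NVMe status code type / status code pair, where SCT 0x0 is the generic command status set and SC 0x08 is "Command Aborted due to SQ Deletion". A minimal standalone sketch of decoding that pair is shown below (not SPDK code; decode_generic_sc and the values hard-coded in main are illustrative only, taken from the log entries above):

/*
 * Minimal sketch: decode the "(SCT/SC)" pair that the completion
 * notices above print, e.g. "(00/08)". Within the generic status
 * code type (SCT 0x0), SC 0x08 is "Command Aborted due to SQ
 * Deletion", matching the "ABORTED - SQ DELETION" text in the log.
 */
#include <stdio.h>

static const char *decode_generic_sc(unsigned sc)
{
    switch (sc) {
    case 0x00: return "SUCCESS";
    case 0x08: return "ABORTED - SQ DELETION";
    default:   return "OTHER GENERIC STATUS";
    }
}

int main(void)
{
    unsigned sct = 0x00, sc = 0x08;   /* the (00/08) pair seen above */

    if (sct == 0x00)
        printf("generic status: %s\n", decode_generic_sc(sc));
    else
        printf("non-generic status type 0x%02x, code 0x%02x\n", sct, sc);
    return 0;
}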
00:20:41.611 [2024-04-26 12:16:42.566011] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:41.611 [2024-04-26 12:16:42.567245] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:41.611 [2024-04-26 12:16:42.568466] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:41.611 [2024-04-26 12:16:42.568508] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:41.611 [2024-04-26 12:16:42.568548] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:41.611 [2024-04-26 12:16:42.568588] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:41.611 [2024-04-26 12:16:42.568927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.568940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.568952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.568959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.568973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.568980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.568989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.568996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.569006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.569014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.569023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.569030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.569039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.569046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.569055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.569063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.569072] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.569079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.569088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.569095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.569105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.569112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.569121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.569128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.569137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.569144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.569154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.569161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.569170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.569179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.569188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.569195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.611 [2024-04-26 12:16:42.569205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.611 [2024-04-26 12:16:42.569212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.612 [2024-04-26 12:16:42.569728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.612 [2024-04-26 12:16:42.569736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.569990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.569998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.570006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1209b40 is same with the state(5) to be set 00:20:41.613 [2024-04-26 12:16:42.570331] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1209b40 was disconnected and freed. reset controller. 
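Once the aborted I/O has been drained, the log shows the disconnected qpairs being freed and each controller being reset; the reconnect attempts that follow fail with "connect() failed, errno = 111", which on Linux is ECONNREFUSED, so spdk_nvme_ctrlr_reconnect_poll_async reports "controller reinitialization failed" and the controllers are left in the failed state. The sketch below (not SPDK code) reproduces that errno under the assumption, consistent with this test, that nothing is listening on 10.0.0.2:4420 at the time of the connect:

/*
 * Minimal sketch: a TCP connect() to a port with no listener
 * typically returns ECONNREFUSED, which is errno 111 on Linux and
 * is the "connect() failed, errno = 111" printed by posix.c above.
 * 10.0.0.2:4420 is the address/port taken from the log.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0)
        return 1;
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}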
00:20:41.613 [2024-04-26 12:16:42.570361] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.613 [2024-04-26 12:16:42.570374] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:41.613 [2024-04-26 12:16:42.570617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.613 [2024-04-26 12:16:42.570795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.613 [2024-04-26 12:16:42.570806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f30f0 with addr=10.0.0.2, port=4420 00:20:41.613 [2024-04-26 12:16:42.570814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f30f0 is same with the state(5) to be set 00:20:41.613 [2024-04-26 12:16:42.570947] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:41.613 [2024-04-26 12:16:42.572153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:41.613 [2024-04-26 12:16:42.572379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.613 [2024-04-26 12:16:42.572702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.613 [2024-04-26 12:16:42.572711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11510c0 with addr=10.0.0.2, port=4420 00:20:41.613 [2024-04-26 12:16:42.572718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11510c0 is same with the state(5) to be set 00:20:41.613 [2024-04-26 12:16:42.572925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.613 [2024-04-26 12:16:42.573277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.613 [2024-04-26 12:16:42.573286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117c8c0 with addr=10.0.0.2, port=4420 00:20:41.613 [2024-04-26 12:16:42.573294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117c8c0 is same with the state(5) to be set 00:20:41.613 [2024-04-26 12:16:42.573305] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f30f0 (9): Bad file descriptor 00:20:41.613 [2024-04-26 12:16:42.574295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.613 [2024-04-26 12:16:42.574667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.613 [2024-04-26 12:16:42.574676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1235a30 with addr=10.0.0.2, port=4420 00:20:41.613 [2024-04-26 12:16:42.574683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235a30 is same with the state(5) to be set 00:20:41.613 [2024-04-26 12:16:42.574692] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11510c0 (9): Bad file descriptor 00:20:41.613 [2024-04-26 12:16:42.574702] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117c8c0 (9): Bad file descriptor 00:20:41.613 [2024-04-26 12:16:42.574710] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:41.613 [2024-04-26 12:16:42.574716] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:41.613 [2024-04-26 12:16:42.574725] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:41.613 [2024-04-26 12:16:42.574762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11737c0 (9): Bad file descriptor 00:20:41.613 [2024-04-26 12:16:42.575096] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.613 [2024-04-26 12:16:42.575126] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1235a30 (9): Bad file descriptor 00:20:41.613 [2024-04-26 12:16:42.575134] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.613 [2024-04-26 12:16:42.575141] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.613 [2024-04-26 12:16:42.575151] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.613 [2024-04-26 12:16:42.575162] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:41.613 [2024-04-26 12:16:42.575169] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:41.613 [2024-04-26 12:16:42.575176] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:41.613 [2024-04-26 12:16:42.575228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.575238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.575249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.575256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.575266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.575273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.575282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.575289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.575299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.575306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.575315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.575323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.575333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.575341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.575350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.575357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.575366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.613 [2024-04-26 12:16:42.575374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.613 [2024-04-26 12:16:42.575383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.575985] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.575998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.576005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.576014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.576021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.614 [2024-04-26 12:16:42.576030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.614 [2024-04-26 12:16:42.576039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.576048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.576055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.576064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.583503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.583559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.583568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.583578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.583585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.583595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.583602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.583611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.583619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.583628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.583635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.583645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.583652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.583661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.583668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.583677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.583684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.583693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.583700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.583709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.583717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.583732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.583739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.583748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.583755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.583764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.583771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.583780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1212140 is same with the state(5) to be set 00:20:41.615 [2024-04-26 12:16:42.585123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585156] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.615 [2024-04-26 12:16:42.585532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.615 [2024-04-26 12:16:42.585541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:41.616 [2024-04-26 12:16:42.585823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.585990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.585999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 
12:16:42.586006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.586015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.586022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.586031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.586038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.586047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.586055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.586064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.586071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.586080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.586086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.586096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.586103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.586112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.586119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.616 [2024-04-26 12:16:42.586130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.616 [2024-04-26 12:16:42.586137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.586146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.586153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.586163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.586170] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.586179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.586185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.586195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.586202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.586210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12eaeb0 is same with the state(5) to be set 00:20:41.617 [2024-04-26 12:16:42.587481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.587987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.587996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.588003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.588012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.588019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.588033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.588041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.588050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.617 [2024-04-26 12:16:42.588057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.617 [2024-04-26 12:16:42.588066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.588549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.588557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1148670 is same with the state(5) to be set 00:20:41.618 [2024-04-26 12:16:42.589826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.589841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.589853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.589860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.589869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.589877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.589886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.589893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.589902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.589910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.589919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.589929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.589938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.589945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.589954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.589961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.589970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.589978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.618 [2024-04-26 12:16:42.589987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.618 [2024-04-26 12:16:42.589994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590043] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.619 [2024-04-26 12:16:42.590638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.619 [2024-04-26 12:16:42.590645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.590654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.590661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.590671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.590678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.590687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.590694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:41.620 [2024-04-26 12:16:42.590703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.590710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.590719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.590726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.590736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.590744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.590753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.590760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.590769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.590776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.590785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.590792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.590801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.590809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.590818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.590825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.590834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.590845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.590854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.590862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 
12:16:42.590871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.590878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.590887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1149b00 is same with the state(5) to be set 00:20:41.620 [2024-04-26 12:16:42.592168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.620 [2024-04-26 12:16:42.592586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.620 [2024-04-26 12:16:42.592595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.592992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.592999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.593008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.593015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.593025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.593031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.593041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.593048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.593057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.593066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.593075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.593082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.593091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.593098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.593107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.593114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.593123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.593130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.593139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.621 [2024-04-26 12:16:42.593146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.621 [2024-04-26 12:16:42.593156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.593163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.593172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.593179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.593188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.593195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.593205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.593212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.593221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.593228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.593236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114afb0 is same with the state(5) to be set 00:20:41.622 [2024-04-26 12:16:42.594507] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.622 [2024-04-26 12:16:42.594520] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.622 [2024-04-26 12:16:42.594528] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:41.622 [2024-04-26 12:16:42.594540] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:41.622 [2024-04-26 12:16:42.594554] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:41.622 [2024-04-26 12:16:42.594589] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:41.622 [2024-04-26 12:16:42.594596] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:41.622 [2024-04-26 12:16:42.594605] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:41.622 [2024-04-26 12:16:42.594657] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:41.622 [2024-04-26 12:16:42.594673] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:41.622 [2024-04-26 12:16:42.594688] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:41.622 [2024-04-26 12:16:42.594764] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:41.622 [2024-04-26 12:16:42.594775] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:41.622 [2024-04-26 12:16:42.594784] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.622 [2024-04-26 12:16:42.595285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.622 [2024-04-26 12:16:42.595647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.622 [2024-04-26 12:16:42.595659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb1fd0 with addr=10.0.0.2, port=4420 00:20:41.622 [2024-04-26 12:16:42.595670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb1fd0 is same with the state(5) to be set 00:20:41.622 [2024-04-26 12:16:42.596117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.622 [2024-04-26 12:16:42.596357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.622 [2024-04-26 12:16:42.596369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1174790 with addr=10.0.0.2, port=4420 00:20:41.622 [2024-04-26 12:16:42.596379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1174790 is same with the state(5) to be set 00:20:41.622 [2024-04-26 12:16:42.596747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.622 [2024-04-26 12:16:42.596966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.622 [2024-04-26 12:16:42.596975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115f8f0 with addr=10.0.0.2, port=4420 00:20:41.622 [2024-04-26 12:16:42.596983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x115f8f0 is same with the state(5) to be set 00:20:41.622 [2024-04-26 12:16:42.598325] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:41.622 [2024-04-26 12:16:42.598341] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:41.622 [2024-04-26 12:16:42.598350] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.622 [2024-04-26 12:16:42.598571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.622 [2024-04-26 12:16:42.598895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.622 [2024-04-26 12:16:42.598905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1240400 with addr=10.0.0.2, port=4420 00:20:41.622 [2024-04-26 12:16:42.598913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1240400 is same with the state(5) to be set 00:20:41.622 [2024-04-26 12:16:42.599246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.622 [2024-04-26 12:16:42.599555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.622 [2024-04-26 12:16:42.599565] 
nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1199c10 with addr=10.0.0.2, port=4420 00:20:41.622 [2024-04-26 12:16:42.599577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1199c10 is same with the state(5) to be set 00:20:41.622 [2024-04-26 12:16:42.599589] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb1fd0 (9): Bad file descriptor 00:20:41.622 [2024-04-26 12:16:42.599599] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1174790 (9): Bad file descriptor 00:20:41.622 [2024-04-26 12:16:42.599607] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x115f8f0 (9): Bad file descriptor 00:20:41.622 [2024-04-26 12:16:42.599687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.622 [2024-04-26 12:16:42.599990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.622 [2024-04-26 12:16:42.599997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:41.623 [2024-04-26 12:16:42.600172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 
12:16:42.600337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600500] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.623 [2024-04-26 12:16:42.600650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.623 [2024-04-26 12:16:42.600657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.624 [2024-04-26 12:16:42.600666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.624 [2024-04-26 12:16:42.600674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.624 [2024-04-26 12:16:42.600683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.624 [2024-04-26 12:16:42.600690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.624 [2024-04-26 12:16:42.600699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.624 [2024-04-26 12:16:42.600706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.624 [2024-04-26 12:16:42.600717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.624 [2024-04-26 12:16:42.600724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.624 [2024-04-26 12:16:42.600733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.624 [2024-04-26 12:16:42.600740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.624 [2024-04-26 12:16:42.600749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.624 [2024-04-26 12:16:42.600756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:41.624 [2024-04-26 12:16:42.600764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114d7e0 is same with the state(5) to be set
00:20:41.624 task offset: 24576 on job bdev=Nvme8n1 fails
00:20:41.624
00:20:41.624 Latency(us)
00:20:41.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:41.624 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:41.624 Job: Nvme1n1 ended in about 0.98 seconds with error
00:20:41.624 Verification LBA range: start 0x0 length 0x400
00:20:41.624 Nvme1n1 : 0.98 193.70 12.11 65.59 0.00 244107.57 18677.76 262144.00
00:20:41.624 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:41.624 Job: Nvme2n1 ended in about 0.99 seconds with error
00:20:41.624 Verification LBA range: start 0x0 length 0x400
00:20:41.624 Nvme2n1 : 0.99 128.84 8.05 64.42 0.00 321402.31 25340.59 279620.27
00:20:41.624 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:41.624 Job: Nvme3n1 ended in about 0.98 seconds with error
00:20:41.624 Verification LBA range: start 0x0 length 0x400
00:20:41.624 Nvme3n1 : 0.98 195.51 12.22 65.51 0.00 233000.61 6116.69 281367.89
00:20:41.624 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:41.624 Job: Nvme4n1 ended in about 1.00 seconds with error
00:20:41.624 Verification LBA range: start 0x0 length 0x400
00:20:41.624 Nvme4n1 : 1.00 195.81 12.24 64.27 0.00 229404.15 12997.97 286610.77
00:20:41.624 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:41.624 Job: Nvme5n1 ended in about 1.00 seconds with error
00:20:41.624 Verification LBA range: start 0x0 length 0x400
00:20:41.624 Nvme5n1 : 1.00 128.23 8.01 64.11 0.00 304041.24 19442.35 304087.04
00:20:41.624 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:41.624 Job: Nvme6n1 ended in about 1.00 seconds with error
00:20:41.624 Verification LBA range: start 0x0 length 0x400
00:20:41.624 Nvme6n1 : 1.00 127.93 8.00 63.97 0.00 298441.67 18677.76 265639.25
00:20:41.624 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:41.624 Job: Nvme7n1 ended in about 1.00 seconds with error
00:20:41.624 Verification LBA range: start 0x0 length 0x400
00:20:41.624 Nvme7n1 : 1.00 127.63 7.98 63.82 0.00 292939.66 22500.69 272629.76
00:20:41.624 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:41.624 Job: Nvme8n1 ended in about 0.97 seconds with error
00:20:41.624 Verification LBA range: start 0x0 length 0x400
00:20:41.624 Nvme8n1 : 0.97 197.68 12.36 65.89 0.00 207121.49 22500.69 277872.64
00:20:41.624 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:41.624 Job: Nvme9n1 ended in about 1.01 seconds with error
00:20:41.624 Verification LBA range: start 0x0 length 0x400
00:20:41.624 Nvme9n1 : 1.01 126.68 7.92 63.34 0.00 283036.16 26214.40 279620.27
00:20:41.624 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:41.624 Job: Nvme10n1 ended in about 0.98 seconds with error
00:20:41.624 Verification LBA range: start 0x0 length 0x400
00:20:41.624 Nvme10n1 : 0.98 195.79 12.24 65.26 0.00 200013.65 8246.61 274377.39
00:20:41.624 ===================================================================================================================
00:20:41.624 Total : 1617.81 101.11 646.19 0.00 255824.01 6116.69 304087.04
00:20:41.624 [2024-04-26 12:16:42.630896] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:41.624 [2024-04-26 12:16:42.630962] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:41.624 [2024-04-26 12:16:42.631423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.624 [2024-04-26 12:16:42.631631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.624 [2024-04-26 12:16:42.631642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f30f0 with addr=10.0.0.2, port=4420
00:20:41.624 [2024-04-26 12:16:42.631652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f30f0 is same with the state(5) to be set
00:20:41.624 [2024-04-26 12:16:42.631995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.624 [2024-04-26 12:16:42.632368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.624 [2024-04-26 12:16:42.632377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117c8c0 with addr=10.0.0.2, port=4420
00:20:41.624 [2024-04-26 12:16:42.632385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117c8c0 is same with the state(5) to be set
00:20:41.624 [2024-04-26 12:16:42.632594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.624 [2024-04-26 12:16:42.632942] posix.c:1037:posix_sock_create:
*ERROR*: connect() failed, errno = 111 00:20:41.624 [2024-04-26 12:16:42.632951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11510c0 with addr=10.0.0.2, port=4420 00:20:41.624 [2024-04-26 12:16:42.632958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11510c0 is same with the state(5) to be set 00:20:41.624 [2024-04-26 12:16:42.632971] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1240400 (9): Bad file descriptor 00:20:41.624 [2024-04-26 12:16:42.632983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1199c10 (9): Bad file descriptor 00:20:41.624 [2024-04-26 12:16:42.632992] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:41.624 [2024-04-26 12:16:42.632999] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:41.624 [2024-04-26 12:16:42.633008] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:41.624 [2024-04-26 12:16:42.633023] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:41.624 [2024-04-26 12:16:42.633029] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:41.624 [2024-04-26 12:16:42.633036] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:41.624 [2024-04-26 12:16:42.633047] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:41.624 [2024-04-26 12:16:42.633053] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:41.624 [2024-04-26 12:16:42.633060] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:41.624 [2024-04-26 12:16:42.633176] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.624 [2024-04-26 12:16:42.633188] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.624 [2024-04-26 12:16:42.633194] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.624 [2024-04-26 12:16:42.633540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.624 [2024-04-26 12:16:42.633727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.624 [2024-04-26 12:16:42.633736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11737c0 with addr=10.0.0.2, port=4420 00:20:41.624 [2024-04-26 12:16:42.633744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11737c0 is same with the state(5) to be set 00:20:41.624 [2024-04-26 12:16:42.633753] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f30f0 (9): Bad file descriptor 00:20:41.624 [2024-04-26 12:16:42.633763] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117c8c0 (9): Bad file descriptor 00:20:41.624 [2024-04-26 12:16:42.633772] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11510c0 (9): Bad file descriptor 00:20:41.624 [2024-04-26 12:16:42.633780] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:41.624 [2024-04-26 12:16:42.633786] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:41.624 [2024-04-26 12:16:42.633793] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:41.624 [2024-04-26 12:16:42.633804] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:41.624 [2024-04-26 12:16:42.633810] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:41.624 [2024-04-26 12:16:42.633817] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:41.624 [2024-04-26 12:16:42.633864] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:41.624 [2024-04-26 12:16:42.633876] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:41.624 [2024-04-26 12:16:42.633895] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:41.624 [2024-04-26 12:16:42.633906] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:41.624 [2024-04-26 12:16:42.633917] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:41.624 [2024-04-26 12:16:42.634219] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.624 [2024-04-26 12:16:42.634228] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.624 [2024-04-26 12:16:42.634248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11737c0 (9): Bad file descriptor 00:20:41.624 [2024-04-26 12:16:42.634257] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:41.625 [2024-04-26 12:16:42.634264] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:41.625 [2024-04-26 12:16:42.634270] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:41.625 [2024-04-26 12:16:42.634280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:41.625 [2024-04-26 12:16:42.634286] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:41.625 [2024-04-26 12:16:42.634293] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:41.625 [2024-04-26 12:16:42.634303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.625 [2024-04-26 12:16:42.634309] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.625 [2024-04-26 12:16:42.634316] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.625 [2024-04-26 12:16:42.634583] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:41.625 [2024-04-26 12:16:42.634595] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:41.625 [2024-04-26 12:16:42.634604] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:41.625 [2024-04-26 12:16:42.634613] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:41.625 [2024-04-26 12:16:42.634621] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.625 [2024-04-26 12:16:42.634627] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.625 [2024-04-26 12:16:42.634633] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.625 [2024-04-26 12:16:42.634661] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:41.625 [2024-04-26 12:16:42.634668] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:41.625 [2024-04-26 12:16:42.634675] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:41.625 [2024-04-26 12:16:42.634709] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.625 [2024-04-26 12:16:42.634912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.625 [2024-04-26 12:16:42.635280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.625 [2024-04-26 12:16:42.635290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1235a30 with addr=10.0.0.2, port=4420 00:20:41.625 [2024-04-26 12:16:42.635297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235a30 is same with the state(5) to be set 00:20:41.625 [2024-04-26 12:16:42.635646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.625 [2024-04-26 12:16:42.635947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.625 [2024-04-26 12:16:42.635957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115f8f0 with addr=10.0.0.2, port=4420 00:20:41.625 [2024-04-26 12:16:42.635964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x115f8f0 is same with the state(5) to be set 00:20:41.625 [2024-04-26 12:16:42.636293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.625 [2024-04-26 12:16:42.636672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.625 [2024-04-26 12:16:42.636681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1174790 with addr=10.0.0.2, port=4420 00:20:41.625 [2024-04-26 12:16:42.636688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1174790 is same with the state(5) to be set 00:20:41.625 [2024-04-26 12:16:42.637098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.625 [2024-04-26 12:16:42.637421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.625 [2024-04-26 12:16:42.637430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb1fd0 with addr=10.0.0.2, port=4420 00:20:41.625 [2024-04-26 12:16:42.637437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb1fd0 is same with the state(5) to be set 00:20:41.625 [2024-04-26 12:16:42.637467] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1235a30 (9): Bad file descriptor 00:20:41.625 [2024-04-26 12:16:42.637478] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x115f8f0 (9): Bad file descriptor 00:20:41.625 [2024-04-26 12:16:42.637486] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1174790 (9): Bad file descriptor 00:20:41.625 [2024-04-26 12:16:42.637495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb1fd0 (9): Bad file descriptor 00:20:41.625 [2024-04-26 12:16:42.637535] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:41.625 [2024-04-26 12:16:42.637543] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:41.625 [2024-04-26 12:16:42.637550] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:20:41.625 [2024-04-26 12:16:42.637559] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:41.625 [2024-04-26 12:16:42.637565] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:41.625 [2024-04-26 12:16:42.637572] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:41.625 [2024-04-26 12:16:42.637581] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:41.625 [2024-04-26 12:16:42.637587] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:41.625 [2024-04-26 12:16:42.637593] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:41.625 [2024-04-26 12:16:42.637602] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:41.625 [2024-04-26 12:16:42.637609] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:41.625 [2024-04-26 12:16:42.637615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:41.625 [2024-04-26 12:16:42.637644] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.625 [2024-04-26 12:16:42.637651] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.625 [2024-04-26 12:16:42.637657] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.625 [2024-04-26 12:16:42.637663] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
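Note on the repeated "connect() failed, errno = 111" entries above: errno 111 on Linux is ECONNREFUSED, which is consistent with this shutdown test tearing the target's listeners down while the host side is still trying to reconnect, so the refusals and the "Resetting controller failed." notices are the expected failure mode here rather than a fabric fault. A hand-run check of the same errno is sketched below; it is not part of the test, it assumes a bash build with network redirections enabled, and it reuses this run's 10.0.0.2:4420 listener address, which no longer accepts connections once the target has stopped.
# hypothetical one-liner: dialing the vacated listener fails with the same
# errno 111 (ECONNREFUSED) that posix_sock_create logs above
bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' || echo 'refused, errno 111 (ECONNREFUSED)'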
00:20:41.625 12:16:42 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:41.625 12:16:42 -- target/shutdown.sh@139 -- # sleep 1 00:20:42.648 12:16:43 -- target/shutdown.sh@142 -- # kill -9 3462764 00:20:42.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3462764) - No such process 00:20:42.648 12:16:43 -- target/shutdown.sh@142 -- # true 00:20:42.648 12:16:43 -- target/shutdown.sh@144 -- # stoptarget 00:20:42.648 12:16:43 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:42.648 12:16:43 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:42.648 12:16:43 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:42.648 12:16:43 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:42.648 12:16:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:42.648 12:16:43 -- nvmf/common.sh@117 -- # sync 00:20:42.648 12:16:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:42.648 12:16:43 -- nvmf/common.sh@120 -- # set +e 00:20:42.648 12:16:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.648 12:16:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:42.648 rmmod nvme_tcp 00:20:42.648 rmmod nvme_fabrics 00:20:42.648 rmmod nvme_keyring 00:20:42.648 12:16:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.648 12:16:43 -- nvmf/common.sh@124 -- # set -e 00:20:42.648 12:16:43 -- nvmf/common.sh@125 -- # return 0 00:20:42.648 12:16:43 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:20:42.648 12:16:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:42.648 12:16:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:42.648 12:16:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:42.648 12:16:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.648 12:16:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:42.648 12:16:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.648 12:16:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.648 12:16:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.190 12:16:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:45.190 00:20:45.190 real 0m7.762s 00:20:45.190 user 0m18.889s 00:20:45.190 sys 0m1.220s 00:20:45.190 12:16:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:45.190 12:16:45 -- common/autotest_common.sh@10 -- # set +x 00:20:45.190 ************************************ 00:20:45.190 END TEST nvmf_shutdown_tc3 00:20:45.190 ************************************ 00:20:45.190 12:16:45 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:45.190 00:20:45.190 real 0m32.826s 00:20:45.190 user 1m16.084s 00:20:45.190 sys 0m9.281s 00:20:45.190 12:16:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:45.190 12:16:45 -- common/autotest_common.sh@10 -- # set +x 00:20:45.190 ************************************ 00:20:45.190 END TEST nvmf_shutdown 00:20:45.190 ************************************ 00:20:45.190 12:16:46 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:20:45.190 12:16:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:45.190 12:16:46 -- common/autotest_common.sh@10 -- # set +x 00:20:45.190 12:16:46 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:20:45.190 12:16:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:45.190 12:16:46 -- common/autotest_common.sh@10 -- # set +x 00:20:45.190 
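For reference, the stoptarget/nvmftestfini trace above condenses to a short teardown sequence. The sketch below is a by-hand equivalent rather than the script itself: SPDK_DIR stands in for the workspace checkout, cvl_0_1 and cvl_0_0_ns_spdk are the interface and namespace names used on this node, and the final netns removal approximates what the harness does through its _remove_spdk_ns helper.
# stop any leftover target/bdevperf process recorded in nvmfpid, tolerating
# the case where it already exited (as it had here: "No such process")
kill -9 "$nvmfpid" 2>/dev/null || true
# drop per-run state and generated config files
rm -f ./local-job0-0-verify.state
rm -rf "$SPDK_DIR/test/nvmf/target/bdevperf.conf" "$SPDK_DIR/test/nvmf/target/rpcs.txt"
# unload the kernel NVMe/TCP initiator stack pulled in for the test
# (this is what produced the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# tear down the point-to-point test network: flush the initiator-side address
# and remove the namespace that held the target-side interface
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true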
12:16:46 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:20:45.190 12:16:46 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:45.190 12:16:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:45.190 12:16:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:45.190 12:16:46 -- common/autotest_common.sh@10 -- # set +x 00:20:45.190 ************************************ 00:20:45.190 START TEST nvmf_multicontroller 00:20:45.190 ************************************ 00:20:45.190 12:16:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:45.190 * Looking for test storage... 00:20:45.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:45.191 12:16:46 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:45.191 12:16:46 -- nvmf/common.sh@7 -- # uname -s 00:20:45.191 12:16:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.191 12:16:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.191 12:16:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.191 12:16:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.191 12:16:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.191 12:16:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.191 12:16:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.191 12:16:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.191 12:16:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.191 12:16:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.191 12:16:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:45.191 12:16:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:45.191 12:16:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.191 12:16:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.191 12:16:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:45.191 12:16:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.191 12:16:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:45.191 12:16:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.191 12:16:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.191 12:16:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.191 12:16:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.191 12:16:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.191 12:16:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.191 12:16:46 -- paths/export.sh@5 -- # export PATH 00:20:45.191 12:16:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.191 12:16:46 -- nvmf/common.sh@47 -- # : 0 00:20:45.191 12:16:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:45.191 12:16:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:45.191 12:16:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.191 12:16:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.191 12:16:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.191 12:16:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:45.191 12:16:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:45.191 12:16:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:45.191 12:16:46 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:45.191 12:16:46 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:45.191 12:16:46 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:45.191 12:16:46 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:45.191 12:16:46 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.191 12:16:46 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:45.191 12:16:46 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:45.191 12:16:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:45.191 12:16:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.191 12:16:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:45.191 12:16:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:45.191 12:16:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:45.191 12:16:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.191 12:16:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.191 12:16:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:20:45.191 12:16:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:45.191 12:16:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:45.191 12:16:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:45.191 12:16:46 -- common/autotest_common.sh@10 -- # set +x 00:20:53.356 12:16:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:53.356 12:16:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:53.356 12:16:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:53.356 12:16:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:53.356 12:16:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:53.356 12:16:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:53.356 12:16:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:53.356 12:16:53 -- nvmf/common.sh@295 -- # net_devs=() 00:20:53.356 12:16:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:53.356 12:16:53 -- nvmf/common.sh@296 -- # e810=() 00:20:53.356 12:16:53 -- nvmf/common.sh@296 -- # local -ga e810 00:20:53.356 12:16:53 -- nvmf/common.sh@297 -- # x722=() 00:20:53.356 12:16:53 -- nvmf/common.sh@297 -- # local -ga x722 00:20:53.356 12:16:53 -- nvmf/common.sh@298 -- # mlx=() 00:20:53.356 12:16:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:53.356 12:16:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.356 12:16:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.356 12:16:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.356 12:16:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.356 12:16:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.356 12:16:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.356 12:16:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.356 12:16:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.356 12:16:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.356 12:16:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.356 12:16:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.356 12:16:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:53.357 12:16:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:53.357 12:16:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:53.357 12:16:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.357 12:16:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:53.357 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:53.357 12:16:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.357 12:16:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:53.357 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:53.357 12:16:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:20:53.357 12:16:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:53.357 12:16:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.357 12:16:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.357 12:16:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:53.357 12:16:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.357 12:16:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:53.357 Found net devices under 0000:31:00.0: cvl_0_0 00:20:53.357 12:16:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.357 12:16:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.357 12:16:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.357 12:16:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:53.357 12:16:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.357 12:16:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:53.357 Found net devices under 0000:31:00.1: cvl_0_1 00:20:53.357 12:16:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.357 12:16:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:53.357 12:16:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:53.357 12:16:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:53.357 12:16:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.357 12:16:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.357 12:16:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.357 12:16:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:53.357 12:16:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.357 12:16:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.357 12:16:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:53.357 12:16:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.357 12:16:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.357 12:16:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:53.357 12:16:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:53.357 12:16:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.357 12:16:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.357 12:16:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.357 12:16:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.357 12:16:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:53.357 12:16:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.357 12:16:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.357 12:16:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:20:53.357 12:16:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:53.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:20:53.357 00:20:53.357 --- 10.0.0.2 ping statistics --- 00:20:53.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.357 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:20:53.357 12:16:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:53.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:20:53.357 00:20:53.357 --- 10.0.0.1 ping statistics --- 00:20:53.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.357 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:20:53.357 12:16:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.357 12:16:53 -- nvmf/common.sh@411 -- # return 0 00:20:53.357 12:16:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:53.357 12:16:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.357 12:16:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:53.357 12:16:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.357 12:16:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:53.357 12:16:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:53.357 12:16:53 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:53.357 12:16:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:53.357 12:16:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:53.357 12:16:53 -- common/autotest_common.sh@10 -- # set +x 00:20:53.357 12:16:53 -- nvmf/common.sh@470 -- # nvmfpid=3467896 00:20:53.357 12:16:53 -- nvmf/common.sh@471 -- # waitforlisten 3467896 00:20:53.357 12:16:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:53.357 12:16:53 -- common/autotest_common.sh@817 -- # '[' -z 3467896 ']' 00:20:53.357 12:16:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.357 12:16:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:53.357 12:16:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.357 12:16:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:53.357 12:16:53 -- common/autotest_common.sh@10 -- # set +x 00:20:53.357 [2024-04-26 12:16:53.678972] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:53.357 [2024-04-26 12:16:53.679058] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.357 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.357 [2024-04-26 12:16:53.770674] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:53.357 [2024-04-26 12:16:53.862103] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:53.357 [2024-04-26 12:16:53.862168] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.357 [2024-04-26 12:16:53.862176] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.357 [2024-04-26 12:16:53.862183] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.357 [2024-04-26 12:16:53.862189] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.357 [2024-04-26 12:16:53.862329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.357 [2024-04-26 12:16:53.862534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.357 [2024-04-26 12:16:53.862535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.357 12:16:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:53.357 12:16:54 -- common/autotest_common.sh@850 -- # return 0 00:20:53.357 12:16:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:53.357 12:16:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:53.357 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.357 12:16:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.357 12:16:54 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:53.357 12:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.357 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.357 [2024-04-26 12:16:54.496675] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.357 12:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.357 12:16:54 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:53.357 12:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.357 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.357 Malloc0 00:20:53.357 12:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.357 12:16:54 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:53.357 12:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.357 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.357 12:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.357 12:16:54 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:53.357 12:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.357 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.357 12:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.357 12:16:54 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:53.357 12:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.357 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.357 [2024-04-26 12:16:54.566637] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.357 12:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.357 12:16:54 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:53.357 12:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.357 12:16:54 
-- common/autotest_common.sh@10 -- # set +x 00:20:53.619 [2024-04-26 12:16:54.578600] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:53.619 12:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.619 12:16:54 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:53.619 12:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.619 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.619 Malloc1 00:20:53.619 12:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.619 12:16:54 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:53.619 12:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.619 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.619 12:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.619 12:16:54 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:53.619 12:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.619 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.619 12:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.619 12:16:54 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:53.619 12:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.619 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.619 12:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.619 12:16:54 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:53.619 12:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.619 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:53.619 12:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.619 12:16:54 -- host/multicontroller.sh@44 -- # bdevperf_pid=3468045 00:20:53.619 12:16:54 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:53.619 12:16:54 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:53.619 12:16:54 -- host/multicontroller.sh@47 -- # waitforlisten 3468045 /var/tmp/bdevperf.sock 00:20:53.619 12:16:54 -- common/autotest_common.sh@817 -- # '[' -z 3468045 ']' 00:20:53.619 12:16:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.619 12:16:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:53.619 12:16:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
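Note on the trace above: this is the target-side provisioning for the multicontroller test. A TCP transport is created, malloc bdevs are exported as subsystems cnode1 and cnode2, each listening on 10.0.0.2 ports 4420 and 4421, and a bdevperf instance is started in RPC-wait mode (-z) on /var/tmp/bdevperf.sock. The rpc_cmd helper seen in the trace is the test suite's wrapper around SPDK's scripts/rpc.py; a minimal sketch of the same sequence run by hand, assuming a running nvmf_tgt and an SPDK checkout as the working directory, would be:

    # Target-side setup, mirroring the rpc_cmd calls traced above
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2/Malloc1 are created the same way, then bdevperf is launched as in the trace:
    # ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f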
00:20:53.619 12:16:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:53.619 12:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:54.559 12:16:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:54.559 12:16:55 -- common/autotest_common.sh@850 -- # return 0 00:20:54.559 12:16:55 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:54.559 12:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.559 12:16:55 -- common/autotest_common.sh@10 -- # set +x 00:20:54.559 NVMe0n1 00:20:54.559 12:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.559 12:16:55 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:54.559 12:16:55 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:54.559 12:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.559 12:16:55 -- common/autotest_common.sh@10 -- # set +x 00:20:54.559 12:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.559 1 00:20:54.559 12:16:55 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:54.559 12:16:55 -- common/autotest_common.sh@638 -- # local es=0 00:20:54.559 12:16:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:54.559 12:16:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:54.559 12:16:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:54.559 12:16:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:54.559 12:16:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:54.559 12:16:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:54.559 12:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.559 12:16:55 -- common/autotest_common.sh@10 -- # set +x 00:20:54.559 request: 00:20:54.559 { 00:20:54.559 "name": "NVMe0", 00:20:54.559 "trtype": "tcp", 00:20:54.559 "traddr": "10.0.0.2", 00:20:54.559 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:54.559 "hostaddr": "10.0.0.2", 00:20:54.559 "hostsvcid": "60000", 00:20:54.559 "adrfam": "ipv4", 00:20:54.559 "trsvcid": "4420", 00:20:54.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.559 "method": "bdev_nvme_attach_controller", 00:20:54.559 "req_id": 1 00:20:54.559 } 00:20:54.559 Got JSON-RPC error response 00:20:54.559 response: 00:20:54.559 { 00:20:54.559 "code": -114, 00:20:54.559 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:54.559 } 00:20:54.559 12:16:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:54.560 12:16:55 -- common/autotest_common.sh@641 -- # es=1 00:20:54.560 12:16:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:54.560 12:16:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:54.560 12:16:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:54.560 12:16:55 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:54.560 12:16:55 -- common/autotest_common.sh@638 -- # local es=0 00:20:54.560 12:16:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:54.560 12:16:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:54.560 12:16:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:54.560 12:16:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:54.560 12:16:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:54.560 12:16:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:54.560 12:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.560 12:16:55 -- common/autotest_common.sh@10 -- # set +x 00:20:54.560 request: 00:20:54.560 { 00:20:54.560 "name": "NVMe0", 00:20:54.560 "trtype": "tcp", 00:20:54.560 "traddr": "10.0.0.2", 00:20:54.560 "hostaddr": "10.0.0.2", 00:20:54.560 "hostsvcid": "60000", 00:20:54.560 "adrfam": "ipv4", 00:20:54.560 "trsvcid": "4420", 00:20:54.560 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:54.560 "method": "bdev_nvme_attach_controller", 00:20:54.560 "req_id": 1 00:20:54.560 } 00:20:54.560 Got JSON-RPC error response 00:20:54.560 response: 00:20:54.560 { 00:20:54.560 "code": -114, 00:20:54.560 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:54.560 } 00:20:54.560 12:16:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:54.560 12:16:55 -- common/autotest_common.sh@641 -- # es=1 00:20:54.560 12:16:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:54.560 12:16:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:54.560 12:16:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:54.560 12:16:55 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:54.560 12:16:55 -- common/autotest_common.sh@638 -- # local es=0 00:20:54.560 12:16:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:54.560 12:16:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:54.560 12:16:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:54.560 12:16:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:54.560 12:16:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:54.560 12:16:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:54.560 12:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.560 12:16:55 -- common/autotest_common.sh@10 -- # set +x 00:20:54.560 request: 00:20:54.560 { 00:20:54.560 "name": "NVMe0", 00:20:54.821 "trtype": "tcp", 00:20:54.821 "traddr": "10.0.0.2", 00:20:54.821 "hostaddr": 
"10.0.0.2", 00:20:54.821 "hostsvcid": "60000", 00:20:54.821 "adrfam": "ipv4", 00:20:54.821 "trsvcid": "4420", 00:20:54.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.821 "multipath": "disable", 00:20:54.821 "method": "bdev_nvme_attach_controller", 00:20:54.821 "req_id": 1 00:20:54.821 } 00:20:54.821 Got JSON-RPC error response 00:20:54.821 response: 00:20:54.821 { 00:20:54.821 "code": -114, 00:20:54.821 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:54.821 } 00:20:54.821 12:16:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:54.821 12:16:55 -- common/autotest_common.sh@641 -- # es=1 00:20:54.821 12:16:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:54.821 12:16:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:54.821 12:16:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:54.821 12:16:55 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:54.821 12:16:55 -- common/autotest_common.sh@638 -- # local es=0 00:20:54.821 12:16:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:54.821 12:16:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:54.821 12:16:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:54.821 12:16:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:54.821 12:16:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:54.821 12:16:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:54.821 12:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.821 12:16:55 -- common/autotest_common.sh@10 -- # set +x 00:20:54.821 request: 00:20:54.821 { 00:20:54.821 "name": "NVMe0", 00:20:54.821 "trtype": "tcp", 00:20:54.821 "traddr": "10.0.0.2", 00:20:54.821 "hostaddr": "10.0.0.2", 00:20:54.821 "hostsvcid": "60000", 00:20:54.821 "adrfam": "ipv4", 00:20:54.821 "trsvcid": "4420", 00:20:54.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.821 "multipath": "failover", 00:20:54.821 "method": "bdev_nvme_attach_controller", 00:20:54.821 "req_id": 1 00:20:54.821 } 00:20:54.821 Got JSON-RPC error response 00:20:54.821 response: 00:20:54.821 { 00:20:54.821 "code": -114, 00:20:54.821 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:54.821 } 00:20:54.821 12:16:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:54.821 12:16:55 -- common/autotest_common.sh@641 -- # es=1 00:20:54.821 12:16:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:54.821 12:16:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:54.821 12:16:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:54.821 12:16:55 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:54.821 12:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.821 12:16:55 -- common/autotest_common.sh@10 -- # set +x 00:20:54.821 00:20:54.821 12:16:55 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:20:54.821 12:16:55 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:54.821 12:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.821 12:16:55 -- common/autotest_common.sh@10 -- # set +x 00:20:54.821 12:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.821 12:16:56 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:54.821 12:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.821 12:16:56 -- common/autotest_common.sh@10 -- # set +x 00:20:55.082 00:20:55.082 12:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.082 12:16:56 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:55.082 12:16:56 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:55.082 12:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.082 12:16:56 -- common/autotest_common.sh@10 -- # set +x 00:20:55.082 12:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.082 12:16:56 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:55.082 12:16:56 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:56.022 0 00:20:56.022 12:16:57 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:56.022 12:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.022 12:16:57 -- common/autotest_common.sh@10 -- # set +x 00:20:56.022 12:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.022 12:16:57 -- host/multicontroller.sh@100 -- # killprocess 3468045 00:20:56.022 12:16:57 -- common/autotest_common.sh@936 -- # '[' -z 3468045 ']' 00:20:56.022 12:16:57 -- common/autotest_common.sh@940 -- # kill -0 3468045 00:20:56.022 12:16:57 -- common/autotest_common.sh@941 -- # uname 00:20:56.022 12:16:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:56.022 12:16:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3468045 00:20:56.282 12:16:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:56.282 12:16:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:56.282 12:16:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3468045' 00:20:56.282 killing process with pid 3468045 00:20:56.282 12:16:57 -- common/autotest_common.sh@955 -- # kill 3468045 00:20:56.282 12:16:57 -- common/autotest_common.sh@960 -- # wait 3468045 00:20:56.282 12:16:57 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:56.282 12:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.282 12:16:57 -- common/autotest_common.sh@10 -- # set +x 00:20:56.282 12:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.282 12:16:57 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:56.282 12:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.282 12:16:57 -- common/autotest_common.sh@10 -- # set +x 00:20:56.282 12:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.282 12:16:57 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
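Note on the block above: this is the core of the multicontroller check. NVMe0 is attached to cnode1 at 10.0.0.2:4420 through the bdevperf RPC socket, and each conflicting re-attach (different host NQN, different subsystem NQN, multipath disable, multipath failover against the same path) is rejected with JSON-RPC error -114; a second path is then attached as NVMe1 on port 4421, bdev_nvme_get_controllers confirms two controllers, and perform_tests drives I/O. A rough sketch of the same probe against the bdevperf socket, reusing the addresses and flags from the trace (the trailing || echo is only illustrative error handling, not part of the test script):

    # First attach succeeds and creates bdev NVMe0n1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # Re-attaching the same controller name against a different subsystem NQN is rejected (-114)
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 \
        || echo "rejected as expected"
    # A second controller on port 4421 is accepted; two controllers should now be listed
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe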
00:20:56.282 12:16:57 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:56.282 12:16:57 -- common/autotest_common.sh@1598 -- # read -r file 00:20:56.282 12:16:57 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:56.282 12:16:57 -- common/autotest_common.sh@1597 -- # sort -u 00:20:56.282 12:16:57 -- common/autotest_common.sh@1599 -- # cat 00:20:56.282 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:56.282 [2024-04-26 12:16:54.694475] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:56.282 [2024-04-26 12:16:54.694534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468045 ] 00:20:56.282 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.282 [2024-04-26 12:16:54.754337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.282 [2024-04-26 12:16:54.817277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.282 [2024-04-26 12:16:56.075660] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name 61a12852-256e-41d8-b91c-530330599679 already exists 00:20:56.282 [2024-04-26 12:16:56.075692] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:61a12852-256e-41d8-b91c-530330599679 alias for bdev NVMe1n1 00:20:56.282 [2024-04-26 12:16:56.075702] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:56.282 Running I/O for 1 seconds... 00:20:56.282 00:20:56.282 Latency(us) 00:20:56.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.282 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:56.282 NVMe0n1 : 1.00 23026.34 89.95 0.00 0.00 5546.50 3099.31 10540.37 00:20:56.282 =================================================================================================================== 00:20:56.282 Total : 23026.34 89.95 0.00 0.00 5546.50 3099.31 10540.37 00:20:56.282 Received shutdown signal, test time was about 1.000000 seconds 00:20:56.282 00:20:56.282 Latency(us) 00:20:56.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.282 =================================================================================================================== 00:20:56.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:56.282 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:56.282 12:16:57 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:56.282 12:16:57 -- common/autotest_common.sh@1598 -- # read -r file 00:20:56.282 12:16:57 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:56.282 12:16:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:56.282 12:16:57 -- nvmf/common.sh@117 -- # sync 00:20:56.282 12:16:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.282 12:16:57 -- nvmf/common.sh@120 -- # set +e 00:20:56.282 12:16:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.282 12:16:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.282 rmmod nvme_tcp 00:20:56.282 rmmod nvme_fabrics 00:20:56.282 rmmod nvme_keyring 00:20:56.543 12:16:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:56.543 12:16:57 -- nvmf/common.sh@124 -- # set -e 
00:20:56.543 12:16:57 -- nvmf/common.sh@125 -- # return 0 00:20:56.543 12:16:57 -- nvmf/common.sh@478 -- # '[' -n 3467896 ']' 00:20:56.543 12:16:57 -- nvmf/common.sh@479 -- # killprocess 3467896 00:20:56.543 12:16:57 -- common/autotest_common.sh@936 -- # '[' -z 3467896 ']' 00:20:56.543 12:16:57 -- common/autotest_common.sh@940 -- # kill -0 3467896 00:20:56.543 12:16:57 -- common/autotest_common.sh@941 -- # uname 00:20:56.543 12:16:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:56.543 12:16:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3467896 00:20:56.543 12:16:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:56.543 12:16:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:56.543 12:16:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3467896' 00:20:56.543 killing process with pid 3467896 00:20:56.543 12:16:57 -- common/autotest_common.sh@955 -- # kill 3467896 00:20:56.543 12:16:57 -- common/autotest_common.sh@960 -- # wait 3467896 00:20:56.543 12:16:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:56.543 12:16:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:56.543 12:16:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:56.543 12:16:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:56.543 12:16:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:56.543 12:16:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.543 12:16:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.543 12:16:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.091 12:16:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:59.091 00:20:59.091 real 0m13.587s 00:20:59.091 user 0m16.829s 00:20:59.091 sys 0m5.996s 00:20:59.091 12:16:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:59.091 12:16:59 -- common/autotest_common.sh@10 -- # set +x 00:20:59.091 ************************************ 00:20:59.091 END TEST nvmf_multicontroller 00:20:59.091 ************************************ 00:20:59.091 12:16:59 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:59.091 12:16:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:59.091 12:16:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:59.091 12:16:59 -- common/autotest_common.sh@10 -- # set +x 00:20:59.091 ************************************ 00:20:59.091 START TEST nvmf_aer 00:20:59.091 ************************************ 00:20:59.091 12:17:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:59.091 * Looking for test storage... 
00:20:59.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:59.091 12:17:00 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.091 12:17:00 -- nvmf/common.sh@7 -- # uname -s 00:20:59.091 12:17:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.091 12:17:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.091 12:17:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.091 12:17:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.091 12:17:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.091 12:17:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.091 12:17:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.091 12:17:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.091 12:17:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.091 12:17:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.091 12:17:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:59.091 12:17:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:59.091 12:17:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.091 12:17:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.091 12:17:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:59.091 12:17:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.091 12:17:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:59.091 12:17:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.091 12:17:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.091 12:17:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.091 12:17:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.091 12:17:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.091 12:17:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.091 12:17:00 -- paths/export.sh@5 -- # export PATH 00:20:59.091 12:17:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.091 12:17:00 -- nvmf/common.sh@47 -- # : 0 00:20:59.091 12:17:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:59.091 12:17:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:59.091 12:17:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.091 12:17:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.091 12:17:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.091 12:17:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:59.091 12:17:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:59.091 12:17:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:59.091 12:17:00 -- host/aer.sh@11 -- # nvmftestinit 00:20:59.091 12:17:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:59.091 12:17:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.091 12:17:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:59.091 12:17:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:59.091 12:17:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:59.091 12:17:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.091 12:17:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.091 12:17:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.091 12:17:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:59.091 12:17:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:59.091 12:17:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:59.091 12:17:00 -- common/autotest_common.sh@10 -- # set +x 00:21:07.228 12:17:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:07.228 12:17:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.228 12:17:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.228 12:17:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.228 12:17:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.228 12:17:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.228 12:17:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.228 12:17:07 -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.228 12:17:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.228 12:17:07 -- nvmf/common.sh@296 -- # e810=() 00:21:07.228 12:17:07 -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.228 12:17:07 -- nvmf/common.sh@297 -- # x722=() 00:21:07.228 
12:17:07 -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.228 12:17:07 -- nvmf/common.sh@298 -- # mlx=() 00:21:07.228 12:17:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.228 12:17:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.228 12:17:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.228 12:17:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.228 12:17:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.228 12:17:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.228 12:17:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.228 12:17:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.228 12:17:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.228 12:17:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.228 12:17:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.228 12:17:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.228 12:17:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.228 12:17:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:07.228 12:17:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.228 12:17:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.228 12:17:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:07.228 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:07.228 12:17:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.228 12:17:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:07.228 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:07.228 12:17:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.228 12:17:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.228 12:17:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.228 12:17:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:07.228 12:17:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.228 12:17:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:07.228 Found net devices under 0000:31:00.0: cvl_0_0 00:21:07.228 12:17:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.228 12:17:07 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.228 12:17:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.228 12:17:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:07.228 12:17:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.228 12:17:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:07.228 Found net devices under 0000:31:00.1: cvl_0_1 00:21:07.228 12:17:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.228 12:17:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:07.228 12:17:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:07.228 12:17:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:07.228 12:17:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:07.228 12:17:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.228 12:17:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.228 12:17:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.228 12:17:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:07.228 12:17:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.228 12:17:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.228 12:17:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:07.228 12:17:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.228 12:17:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.228 12:17:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:07.228 12:17:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:07.228 12:17:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.228 12:17:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.228 12:17:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.228 12:17:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.228 12:17:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:07.228 12:17:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.228 12:17:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.228 12:17:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.228 12:17:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:07.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:21:07.228 00:21:07.229 --- 10.0.0.2 ping statistics --- 00:21:07.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.229 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:21:07.229 12:17:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:07.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:21:07.229 00:21:07.229 --- 10.0.0.1 ping statistics --- 00:21:07.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.229 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:21:07.229 12:17:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.229 12:17:07 -- nvmf/common.sh@411 -- # return 0 00:21:07.229 12:17:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:07.229 12:17:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.229 12:17:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:07.229 12:17:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:07.229 12:17:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.229 12:17:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:07.229 12:17:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:07.229 12:17:07 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:07.229 12:17:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:07.229 12:17:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:07.229 12:17:07 -- common/autotest_common.sh@10 -- # set +x 00:21:07.229 12:17:07 -- nvmf/common.sh@470 -- # nvmfpid=3472983 00:21:07.229 12:17:07 -- nvmf/common.sh@471 -- # waitforlisten 3472983 00:21:07.229 12:17:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:07.229 12:17:07 -- common/autotest_common.sh@817 -- # '[' -z 3472983 ']' 00:21:07.229 12:17:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.229 12:17:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:07.229 12:17:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.229 12:17:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:07.229 12:17:07 -- common/autotest_common.sh@10 -- # set +x 00:21:07.229 [2024-04-26 12:17:07.524924] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:21:07.229 [2024-04-26 12:17:07.524973] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.229 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.229 [2024-04-26 12:17:07.591107] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.229 [2024-04-26 12:17:07.655389] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.229 [2024-04-26 12:17:07.655428] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.229 [2024-04-26 12:17:07.655439] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.229 [2024-04-26 12:17:07.655445] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.229 [2024-04-26 12:17:07.655451] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
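Note on the setup traced above: before the aer test proper starts, nvmf_tcp_init moves one port of the E810 pair (cvl_0_0) into its own network namespace, assigns 10.0.0.1 and 10.0.0.2 to the two ends, opens TCP port 4420 in the firewall, verifies reachability with a ping in each direction, and then launches the target inside that namespace. Stripped of the xtrace decoration, the plumbing amounts to the following; interface names and addresses are the ones discovered above and will differ on other hosts, and paths are relative to an SPDK build tree:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    # nvmf_tgt is then started inside the namespace, as in the trace:
    # ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF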
00:21:07.229 [2024-04-26 12:17:07.655593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.229 [2024-04-26 12:17:07.655712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.229 [2024-04-26 12:17:07.655883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.229 [2024-04-26 12:17:07.655884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.229 12:17:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:07.229 12:17:08 -- common/autotest_common.sh@850 -- # return 0 00:21:07.229 12:17:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:07.229 12:17:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:07.229 12:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.229 12:17:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.229 12:17:08 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:07.229 12:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.229 12:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.229 [2024-04-26 12:17:08.329478] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.229 12:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.229 12:17:08 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:07.229 12:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.229 12:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.229 Malloc0 00:21:07.229 12:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.229 12:17:08 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:07.229 12:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.229 12:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.229 12:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.229 12:17:08 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:07.229 12:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.229 12:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.229 12:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.229 12:17:08 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.229 12:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.229 12:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.229 [2024-04-26 12:17:08.388872] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.229 12:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.229 12:17:08 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:07.229 12:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.229 12:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.229 [2024-04-26 12:17:08.400674] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:07.229 [ 00:21:07.229 { 00:21:07.229 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:07.229 "subtype": "Discovery", 00:21:07.229 "listen_addresses": [], 00:21:07.229 "allow_any_host": true, 00:21:07.229 "hosts": [] 00:21:07.229 }, 00:21:07.229 { 00:21:07.229 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:21:07.229 "subtype": "NVMe", 00:21:07.229 "listen_addresses": [ 00:21:07.229 { 00:21:07.229 "transport": "TCP", 00:21:07.229 "trtype": "TCP", 00:21:07.229 "adrfam": "IPv4", 00:21:07.229 "traddr": "10.0.0.2", 00:21:07.229 "trsvcid": "4420" 00:21:07.229 } 00:21:07.229 ], 00:21:07.229 "allow_any_host": true, 00:21:07.229 "hosts": [], 00:21:07.229 "serial_number": "SPDK00000000000001", 00:21:07.229 "model_number": "SPDK bdev Controller", 00:21:07.229 "max_namespaces": 2, 00:21:07.229 "min_cntlid": 1, 00:21:07.229 "max_cntlid": 65519, 00:21:07.229 "namespaces": [ 00:21:07.229 { 00:21:07.229 "nsid": 1, 00:21:07.229 "bdev_name": "Malloc0", 00:21:07.229 "name": "Malloc0", 00:21:07.229 "nguid": "9956A16A22FB48149AA1CD24341433C9", 00:21:07.229 "uuid": "9956a16a-22fb-4814-9aa1-cd24341433c9" 00:21:07.229 } 00:21:07.229 ] 00:21:07.229 } 00:21:07.229 ] 00:21:07.229 12:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.229 12:17:08 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:07.229 12:17:08 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:07.229 12:17:08 -- host/aer.sh@33 -- # aerpid=3473026 00:21:07.229 12:17:08 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:07.229 12:17:08 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:07.229 12:17:08 -- common/autotest_common.sh@1251 -- # local i=0 00:21:07.229 12:17:08 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:07.229 12:17:08 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:21:07.229 12:17:08 -- common/autotest_common.sh@1254 -- # i=1 00:21:07.229 12:17:08 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:07.490 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.490 12:17:08 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:07.490 12:17:08 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:21:07.490 12:17:08 -- common/autotest_common.sh@1254 -- # i=2 00:21:07.490 12:17:08 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:07.490 12:17:08 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:07.490 12:17:08 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:07.490 12:17:08 -- common/autotest_common.sh@1262 -- # return 0 00:21:07.490 12:17:08 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:07.490 12:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.490 12:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.490 Malloc1 00:21:07.490 12:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.490 12:17:08 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:07.490 12:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.490 12:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.490 12:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.490 12:17:08 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:07.490 12:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.490 12:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.490 Asynchronous Event Request test 00:21:07.490 Attaching to 10.0.0.2 00:21:07.490 Attached to 10.0.0.2 00:21:07.490 Registering asynchronous event callbacks... 
00:21:07.490 Starting namespace attribute notice tests for all controllers... 00:21:07.490 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:07.490 aer_cb - Changed Namespace 00:21:07.490 Cleaning up... 00:21:07.490 [ 00:21:07.490 { 00:21:07.490 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:07.490 "subtype": "Discovery", 00:21:07.490 "listen_addresses": [], 00:21:07.490 "allow_any_host": true, 00:21:07.491 "hosts": [] 00:21:07.491 }, 00:21:07.491 { 00:21:07.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.491 "subtype": "NVMe", 00:21:07.491 "listen_addresses": [ 00:21:07.491 { 00:21:07.491 "transport": "TCP", 00:21:07.491 "trtype": "TCP", 00:21:07.491 "adrfam": "IPv4", 00:21:07.491 "traddr": "10.0.0.2", 00:21:07.491 "trsvcid": "4420" 00:21:07.491 } 00:21:07.491 ], 00:21:07.491 "allow_any_host": true, 00:21:07.491 "hosts": [], 00:21:07.491 "serial_number": "SPDK00000000000001", 00:21:07.491 "model_number": "SPDK bdev Controller", 00:21:07.491 "max_namespaces": 2, 00:21:07.491 "min_cntlid": 1, 00:21:07.491 "max_cntlid": 65519, 00:21:07.491 "namespaces": [ 00:21:07.491 { 00:21:07.491 "nsid": 1, 00:21:07.491 "bdev_name": "Malloc0", 00:21:07.491 "name": "Malloc0", 00:21:07.491 "nguid": "9956A16A22FB48149AA1CD24341433C9", 00:21:07.491 "uuid": "9956a16a-22fb-4814-9aa1-cd24341433c9" 00:21:07.491 }, 00:21:07.491 { 00:21:07.491 "nsid": 2, 00:21:07.491 "bdev_name": "Malloc1", 00:21:07.491 "name": "Malloc1", 00:21:07.491 "nguid": "2C3997917BCD409EAF4BCA72EA551745", 00:21:07.491 "uuid": "2c399791-7bcd-409e-af4b-ca72ea551745" 00:21:07.491 } 00:21:07.491 ] 00:21:07.491 } 00:21:07.491 ] 00:21:07.491 12:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.491 12:17:08 -- host/aer.sh@43 -- # wait 3473026 00:21:07.491 12:17:08 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:07.491 12:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.491 12:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.491 12:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.491 12:17:08 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:07.491 12:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.491 12:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.752 12:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.752 12:17:08 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:07.752 12:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.752 12:17:08 -- common/autotest_common.sh@10 -- # set +x 00:21:07.752 12:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.752 12:17:08 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:07.752 12:17:08 -- host/aer.sh@51 -- # nvmftestfini 00:21:07.752 12:17:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:07.752 12:17:08 -- nvmf/common.sh@117 -- # sync 00:21:07.752 12:17:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:07.752 12:17:08 -- nvmf/common.sh@120 -- # set +e 00:21:07.752 12:17:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:07.752 12:17:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:07.752 rmmod nvme_tcp 00:21:07.752 rmmod nvme_fabrics 00:21:07.752 rmmod nvme_keyring 00:21:07.752 12:17:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:07.752 12:17:08 -- nvmf/common.sh@124 -- # set -e 00:21:07.752 12:17:08 -- nvmf/common.sh@125 -- # return 0 00:21:07.752 12:17:08 -- nvmf/common.sh@478 -- # '[' -n 3472983 ']' 00:21:07.752 12:17:08 
-- nvmf/common.sh@479 -- # killprocess 3472983 00:21:07.752 12:17:08 -- common/autotest_common.sh@936 -- # '[' -z 3472983 ']' 00:21:07.752 12:17:08 -- common/autotest_common.sh@940 -- # kill -0 3472983 00:21:07.752 12:17:08 -- common/autotest_common.sh@941 -- # uname 00:21:07.752 12:17:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:07.752 12:17:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3472983 00:21:07.752 12:17:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:07.752 12:17:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:07.752 12:17:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3472983' 00:21:07.752 killing process with pid 3472983 00:21:07.752 12:17:08 -- common/autotest_common.sh@955 -- # kill 3472983 00:21:07.752 [2024-04-26 12:17:08.873036] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:07.752 12:17:08 -- common/autotest_common.sh@960 -- # wait 3472983 00:21:08.014 12:17:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:08.014 12:17:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:08.014 12:17:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:08.014 12:17:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.014 12:17:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:08.014 12:17:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.014 12:17:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.014 12:17:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.926 12:17:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:09.926 00:21:09.926 real 0m11.058s 00:21:09.926 user 0m7.564s 00:21:09.926 sys 0m5.767s 00:21:09.926 12:17:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:09.926 12:17:11 -- common/autotest_common.sh@10 -- # set +x 00:21:09.926 ************************************ 00:21:09.926 END TEST nvmf_aer 00:21:09.926 ************************************ 00:21:09.927 12:17:11 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:09.927 12:17:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:09.927 12:17:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:09.927 12:17:11 -- common/autotest_common.sh@10 -- # set +x 00:21:10.188 ************************************ 00:21:10.188 START TEST nvmf_async_init 00:21:10.188 ************************************ 00:21:10.188 12:17:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:10.188 * Looking for test storage... 
00:21:10.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.188 12:17:11 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.188 12:17:11 -- nvmf/common.sh@7 -- # uname -s 00:21:10.188 12:17:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.188 12:17:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.188 12:17:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.188 12:17:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.188 12:17:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.188 12:17:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.188 12:17:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.188 12:17:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.188 12:17:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.188 12:17:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.188 12:17:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:10.188 12:17:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:10.188 12:17:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.188 12:17:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.188 12:17:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.188 12:17:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.188 12:17:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.188 12:17:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.188 12:17:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.188 12:17:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.188 12:17:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.188 12:17:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.188 12:17:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.188 12:17:11 -- paths/export.sh@5 -- # export PATH 00:21:10.188 12:17:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.188 12:17:11 -- nvmf/common.sh@47 -- # : 0 00:21:10.188 12:17:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.188 12:17:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.188 12:17:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.188 12:17:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.188 12:17:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.188 12:17:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.188 12:17:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:10.188 12:17:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.449 12:17:11 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:10.449 12:17:11 -- host/async_init.sh@14 -- # null_block_size=512 00:21:10.449 12:17:11 -- host/async_init.sh@15 -- # null_bdev=null0 00:21:10.449 12:17:11 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:10.449 12:17:11 -- host/async_init.sh@20 -- # uuidgen 00:21:10.449 12:17:11 -- host/async_init.sh@20 -- # tr -d - 00:21:10.449 12:17:11 -- host/async_init.sh@20 -- # nguid=2a7dc4fbfdfb4d3dad40c85dcde60114 00:21:10.449 12:17:11 -- host/async_init.sh@22 -- # nvmftestinit 00:21:10.449 12:17:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:10.449 12:17:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.449 12:17:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:10.449 12:17:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:10.449 12:17:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:10.449 12:17:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.449 12:17:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.449 12:17:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.449 12:17:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:10.449 12:17:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:10.449 12:17:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:10.449 12:17:11 -- common/autotest_common.sh@10 -- # set +x 00:21:17.034 12:17:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:17.034 12:17:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:17.034 12:17:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:17.034 12:17:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:17.034 12:17:17 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:17.034 12:17:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:17.034 12:17:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:17.034 12:17:17 -- nvmf/common.sh@295 -- # net_devs=() 00:21:17.034 12:17:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:17.034 12:17:17 -- nvmf/common.sh@296 -- # e810=() 00:21:17.034 12:17:17 -- nvmf/common.sh@296 -- # local -ga e810 00:21:17.034 12:17:17 -- nvmf/common.sh@297 -- # x722=() 00:21:17.034 12:17:17 -- nvmf/common.sh@297 -- # local -ga x722 00:21:17.034 12:17:17 -- nvmf/common.sh@298 -- # mlx=() 00:21:17.034 12:17:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:17.034 12:17:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.034 12:17:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.034 12:17:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.034 12:17:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.034 12:17:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.034 12:17:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.034 12:17:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.034 12:17:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.034 12:17:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.034 12:17:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.034 12:17:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.034 12:17:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:17.034 12:17:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:17.034 12:17:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:17.034 12:17:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.034 12:17:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:17.034 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:17.034 12:17:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.034 12:17:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:17.034 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:17.034 12:17:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:17.034 12:17:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.034 
12:17:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.034 12:17:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:17.034 12:17:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.034 12:17:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:17.034 Found net devices under 0000:31:00.0: cvl_0_0 00:21:17.034 12:17:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.034 12:17:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.034 12:17:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.034 12:17:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:17.034 12:17:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.034 12:17:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:17.034 Found net devices under 0000:31:00.1: cvl_0_1 00:21:17.034 12:17:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.034 12:17:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:17.034 12:17:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:17.034 12:17:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:17.034 12:17:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:17.034 12:17:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.034 12:17:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.034 12:17:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.034 12:17:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:17.034 12:17:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.034 12:17:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.034 12:17:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:17.034 12:17:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.034 12:17:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.034 12:17:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:17.034 12:17:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:17.034 12:17:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.034 12:17:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:17.034 12:17:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:17.034 12:17:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:17.034 12:17:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:17.034 12:17:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:17.034 12:17:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:17.034 12:17:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:17.034 12:17:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:17.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:17.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.724 ms 00:21:17.034 00:21:17.034 --- 10.0.0.2 ping statistics --- 00:21:17.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.034 rtt min/avg/max/mdev = 0.724/0.724/0.724/0.000 ms 00:21:17.034 12:17:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:17.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:17.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:21:17.034 00:21:17.034 --- 10.0.0.1 ping statistics --- 00:21:17.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.034 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:21:17.034 12:17:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.034 12:17:18 -- nvmf/common.sh@411 -- # return 0 00:21:17.034 12:17:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:17.034 12:17:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.034 12:17:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:17.034 12:17:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:17.034 12:17:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.034 12:17:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:17.034 12:17:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:17.295 12:17:18 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:17.295 12:17:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:17.295 12:17:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:17.295 12:17:18 -- common/autotest_common.sh@10 -- # set +x 00:21:17.295 12:17:18 -- nvmf/common.sh@470 -- # nvmfpid=3477416 00:21:17.295 12:17:18 -- nvmf/common.sh@471 -- # waitforlisten 3477416 00:21:17.295 12:17:18 -- common/autotest_common.sh@817 -- # '[' -z 3477416 ']' 00:21:17.295 12:17:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.295 12:17:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:17.295 12:17:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.295 12:17:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:17.295 12:17:18 -- common/autotest_common.sh@10 -- # set +x 00:21:17.295 12:17:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:17.295 [2024-04-26 12:17:18.338061] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:21:17.295 [2024-04-26 12:17:18.338130] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.295 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.295 [2024-04-26 12:17:18.409760] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.295 [2024-04-26 12:17:18.481752] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.295 [2024-04-26 12:17:18.481790] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.295 [2024-04-26 12:17:18.481798] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.295 [2024-04-26 12:17:18.481804] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.295 [2024-04-26 12:17:18.481810] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
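Editor's note: the nvmf_tcp_init trace above boils down to the bring-up below. This is a hand-written recap for readability, not the script itself; the interface names (cvl_0_0 / cvl_0_1), the namespace name and the 10.0.0.x addresses are the ones captured in this particular run, and the commands are executed with root privileges in the CI environment.

    # Target NIC moves into its own network namespace; initiator NIC stays in the default one.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check
    # nvmfappstart then launches the target inside the namespace, pinned to core 0 for this test:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1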
00:21:17.295 [2024-04-26 12:17:18.481835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.238 12:17:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:18.238 12:17:19 -- common/autotest_common.sh@850 -- # return 0 00:21:18.238 12:17:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:18.238 12:17:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:18.238 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.238 12:17:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.238 12:17:19 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:18.238 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.238 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.238 [2024-04-26 12:17:19.145159] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.238 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.238 12:17:19 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:18.238 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.238 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.238 null0 00:21:18.238 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.238 12:17:19 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:18.238 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.238 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.238 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.238 12:17:19 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:18.238 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.238 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.238 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.238 12:17:19 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2a7dc4fbfdfb4d3dad40c85dcde60114 00:21:18.238 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.238 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.238 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.238 12:17:19 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:18.238 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.238 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.238 [2024-04-26 12:17:19.201403] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.238 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.238 12:17:19 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:18.238 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.238 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.238 nvme0n1 00:21:18.238 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.238 12:17:19 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:18.238 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.238 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.238 [ 00:21:18.239 { 00:21:18.239 "name": "nvme0n1", 00:21:18.239 "aliases": [ 00:21:18.239 
"2a7dc4fb-fdfb-4d3d-ad40-c85dcde60114" 00:21:18.239 ], 00:21:18.239 "product_name": "NVMe disk", 00:21:18.239 "block_size": 512, 00:21:18.239 "num_blocks": 2097152, 00:21:18.239 "uuid": "2a7dc4fb-fdfb-4d3d-ad40-c85dcde60114", 00:21:18.239 "assigned_rate_limits": { 00:21:18.239 "rw_ios_per_sec": 0, 00:21:18.239 "rw_mbytes_per_sec": 0, 00:21:18.239 "r_mbytes_per_sec": 0, 00:21:18.239 "w_mbytes_per_sec": 0 00:21:18.239 }, 00:21:18.239 "claimed": false, 00:21:18.239 "zoned": false, 00:21:18.239 "supported_io_types": { 00:21:18.239 "read": true, 00:21:18.239 "write": true, 00:21:18.239 "unmap": false, 00:21:18.239 "write_zeroes": true, 00:21:18.239 "flush": true, 00:21:18.239 "reset": true, 00:21:18.239 "compare": true, 00:21:18.239 "compare_and_write": true, 00:21:18.239 "abort": true, 00:21:18.239 "nvme_admin": true, 00:21:18.239 "nvme_io": true 00:21:18.239 }, 00:21:18.239 "memory_domains": [ 00:21:18.239 { 00:21:18.239 "dma_device_id": "system", 00:21:18.239 "dma_device_type": 1 00:21:18.239 } 00:21:18.239 ], 00:21:18.239 "driver_specific": { 00:21:18.239 "nvme": [ 00:21:18.239 { 00:21:18.239 "trid": { 00:21:18.239 "trtype": "TCP", 00:21:18.239 "adrfam": "IPv4", 00:21:18.239 "traddr": "10.0.0.2", 00:21:18.239 "trsvcid": "4420", 00:21:18.239 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:18.239 }, 00:21:18.239 "ctrlr_data": { 00:21:18.239 "cntlid": 1, 00:21:18.239 "vendor_id": "0x8086", 00:21:18.239 "model_number": "SPDK bdev Controller", 00:21:18.500 "serial_number": "00000000000000000000", 00:21:18.500 "firmware_revision": "24.05", 00:21:18.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:18.500 "oacs": { 00:21:18.500 "security": 0, 00:21:18.500 "format": 0, 00:21:18.500 "firmware": 0, 00:21:18.500 "ns_manage": 0 00:21:18.500 }, 00:21:18.500 "multi_ctrlr": true, 00:21:18.500 "ana_reporting": false 00:21:18.500 }, 00:21:18.500 "vs": { 00:21:18.500 "nvme_version": "1.3" 00:21:18.500 }, 00:21:18.500 "ns_data": { 00:21:18.500 "id": 1, 00:21:18.500 "can_share": true 00:21:18.500 } 00:21:18.500 } 00:21:18.500 ], 00:21:18.500 "mp_policy": "active_passive" 00:21:18.500 } 00:21:18.500 } 00:21:18.500 ] 00:21:18.500 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.500 12:17:19 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:18.500 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.500 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.500 [2024-04-26 12:17:19.465963] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:18.500 [2024-04-26 12:17:19.466023] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1240550 (9): Bad file descriptor 00:21:18.500 [2024-04-26 12:17:19.597935] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:18.500 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.500 12:17:19 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:18.500 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.500 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.500 [ 00:21:18.500 { 00:21:18.500 "name": "nvme0n1", 00:21:18.500 "aliases": [ 00:21:18.501 "2a7dc4fb-fdfb-4d3d-ad40-c85dcde60114" 00:21:18.501 ], 00:21:18.501 "product_name": "NVMe disk", 00:21:18.501 "block_size": 512, 00:21:18.501 "num_blocks": 2097152, 00:21:18.501 "uuid": "2a7dc4fb-fdfb-4d3d-ad40-c85dcde60114", 00:21:18.501 "assigned_rate_limits": { 00:21:18.501 "rw_ios_per_sec": 0, 00:21:18.501 "rw_mbytes_per_sec": 0, 00:21:18.501 "r_mbytes_per_sec": 0, 00:21:18.501 "w_mbytes_per_sec": 0 00:21:18.501 }, 00:21:18.501 "claimed": false, 00:21:18.501 "zoned": false, 00:21:18.501 "supported_io_types": { 00:21:18.501 "read": true, 00:21:18.501 "write": true, 00:21:18.501 "unmap": false, 00:21:18.501 "write_zeroes": true, 00:21:18.501 "flush": true, 00:21:18.501 "reset": true, 00:21:18.501 "compare": true, 00:21:18.501 "compare_and_write": true, 00:21:18.501 "abort": true, 00:21:18.501 "nvme_admin": true, 00:21:18.501 "nvme_io": true 00:21:18.501 }, 00:21:18.501 "memory_domains": [ 00:21:18.501 { 00:21:18.501 "dma_device_id": "system", 00:21:18.501 "dma_device_type": 1 00:21:18.501 } 00:21:18.501 ], 00:21:18.501 "driver_specific": { 00:21:18.501 "nvme": [ 00:21:18.501 { 00:21:18.501 "trid": { 00:21:18.501 "trtype": "TCP", 00:21:18.501 "adrfam": "IPv4", 00:21:18.501 "traddr": "10.0.0.2", 00:21:18.501 "trsvcid": "4420", 00:21:18.501 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:18.501 }, 00:21:18.501 "ctrlr_data": { 00:21:18.501 "cntlid": 2, 00:21:18.501 "vendor_id": "0x8086", 00:21:18.501 "model_number": "SPDK bdev Controller", 00:21:18.501 "serial_number": "00000000000000000000", 00:21:18.501 "firmware_revision": "24.05", 00:21:18.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:18.501 "oacs": { 00:21:18.501 "security": 0, 00:21:18.501 "format": 0, 00:21:18.501 "firmware": 0, 00:21:18.501 "ns_manage": 0 00:21:18.501 }, 00:21:18.501 "multi_ctrlr": true, 00:21:18.501 "ana_reporting": false 00:21:18.501 }, 00:21:18.501 "vs": { 00:21:18.501 "nvme_version": "1.3" 00:21:18.501 }, 00:21:18.501 "ns_data": { 00:21:18.501 "id": 1, 00:21:18.501 "can_share": true 00:21:18.501 } 00:21:18.501 } 00:21:18.501 ], 00:21:18.501 "mp_policy": "active_passive" 00:21:18.501 } 00:21:18.501 } 00:21:18.501 ] 00:21:18.501 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.501 12:17:19 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.501 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.501 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.501 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.501 12:17:19 -- host/async_init.sh@53 -- # mktemp 00:21:18.501 12:17:19 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.l1fB0MebYh 00:21:18.501 12:17:19 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:18.501 12:17:19 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.l1fB0MebYh 00:21:18.501 12:17:19 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:18.501 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.501 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.501 12:17:19 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.501 12:17:19 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:18.501 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.501 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.501 [2024-04-26 12:17:19.662576] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.501 [2024-04-26 12:17:19.662691] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:18.501 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.501 12:17:19 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.l1fB0MebYh 00:21:18.501 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.501 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.501 [2024-04-26 12:17:19.674603] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:18.501 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.501 12:17:19 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.l1fB0MebYh 00:21:18.501 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.501 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.501 [2024-04-26 12:17:19.686639] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.501 [2024-04-26 12:17:19.686675] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:18.762 nvme0n1 00:21:18.762 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.762 12:17:19 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:18.762 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.762 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.762 [ 00:21:18.762 { 00:21:18.762 "name": "nvme0n1", 00:21:18.762 "aliases": [ 00:21:18.762 "2a7dc4fb-fdfb-4d3d-ad40-c85dcde60114" 00:21:18.762 ], 00:21:18.762 "product_name": "NVMe disk", 00:21:18.762 "block_size": 512, 00:21:18.762 "num_blocks": 2097152, 00:21:18.762 "uuid": "2a7dc4fb-fdfb-4d3d-ad40-c85dcde60114", 00:21:18.762 "assigned_rate_limits": { 00:21:18.762 "rw_ios_per_sec": 0, 00:21:18.762 "rw_mbytes_per_sec": 0, 00:21:18.762 "r_mbytes_per_sec": 0, 00:21:18.762 "w_mbytes_per_sec": 0 00:21:18.762 }, 00:21:18.762 "claimed": false, 00:21:18.762 "zoned": false, 00:21:18.762 "supported_io_types": { 00:21:18.762 "read": true, 00:21:18.762 "write": true, 00:21:18.762 "unmap": false, 00:21:18.762 "write_zeroes": true, 00:21:18.762 "flush": true, 00:21:18.762 "reset": true, 00:21:18.762 "compare": true, 00:21:18.762 "compare_and_write": true, 00:21:18.762 "abort": true, 00:21:18.762 "nvme_admin": true, 00:21:18.762 "nvme_io": true 00:21:18.762 }, 00:21:18.762 "memory_domains": [ 00:21:18.762 { 00:21:18.762 "dma_device_id": "system", 00:21:18.762 "dma_device_type": 1 00:21:18.762 } 00:21:18.762 ], 00:21:18.763 "driver_specific": { 00:21:18.763 "nvme": [ 00:21:18.763 { 00:21:18.763 "trid": { 00:21:18.763 "trtype": "TCP", 00:21:18.763 "adrfam": "IPv4", 00:21:18.763 "traddr": "10.0.0.2", 
00:21:18.763 "trsvcid": "4421", 00:21:18.763 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:18.763 }, 00:21:18.763 "ctrlr_data": { 00:21:18.763 "cntlid": 3, 00:21:18.763 "vendor_id": "0x8086", 00:21:18.763 "model_number": "SPDK bdev Controller", 00:21:18.763 "serial_number": "00000000000000000000", 00:21:18.763 "firmware_revision": "24.05", 00:21:18.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:18.763 "oacs": { 00:21:18.763 "security": 0, 00:21:18.763 "format": 0, 00:21:18.763 "firmware": 0, 00:21:18.763 "ns_manage": 0 00:21:18.763 }, 00:21:18.763 "multi_ctrlr": true, 00:21:18.763 "ana_reporting": false 00:21:18.763 }, 00:21:18.763 "vs": { 00:21:18.763 "nvme_version": "1.3" 00:21:18.763 }, 00:21:18.763 "ns_data": { 00:21:18.763 "id": 1, 00:21:18.763 "can_share": true 00:21:18.763 } 00:21:18.763 } 00:21:18.763 ], 00:21:18.763 "mp_policy": "active_passive" 00:21:18.763 } 00:21:18.763 } 00:21:18.763 ] 00:21:18.763 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.763 12:17:19 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.763 12:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.763 12:17:19 -- common/autotest_common.sh@10 -- # set +x 00:21:18.763 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.763 12:17:19 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.l1fB0MebYh 00:21:18.763 12:17:19 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:18.763 12:17:19 -- host/async_init.sh@78 -- # nvmftestfini 00:21:18.763 12:17:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:18.763 12:17:19 -- nvmf/common.sh@117 -- # sync 00:21:18.763 12:17:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:18.763 12:17:19 -- nvmf/common.sh@120 -- # set +e 00:21:18.763 12:17:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:18.763 12:17:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:18.763 rmmod nvme_tcp 00:21:18.763 rmmod nvme_fabrics 00:21:18.763 rmmod nvme_keyring 00:21:18.763 12:17:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:18.763 12:17:19 -- nvmf/common.sh@124 -- # set -e 00:21:18.763 12:17:19 -- nvmf/common.sh@125 -- # return 0 00:21:18.763 12:17:19 -- nvmf/common.sh@478 -- # '[' -n 3477416 ']' 00:21:18.763 12:17:19 -- nvmf/common.sh@479 -- # killprocess 3477416 00:21:18.763 12:17:19 -- common/autotest_common.sh@936 -- # '[' -z 3477416 ']' 00:21:18.763 12:17:19 -- common/autotest_common.sh@940 -- # kill -0 3477416 00:21:18.763 12:17:19 -- common/autotest_common.sh@941 -- # uname 00:21:18.763 12:17:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:18.763 12:17:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3477416 00:21:18.763 12:17:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:18.763 12:17:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:18.763 12:17:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3477416' 00:21:18.763 killing process with pid 3477416 00:21:18.763 12:17:19 -- common/autotest_common.sh@955 -- # kill 3477416 00:21:18.763 [2024-04-26 12:17:19.915718] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:18.763 [2024-04-26 12:17:19.915746] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:18.763 12:17:19 -- common/autotest_common.sh@960 -- # wait 3477416 00:21:19.025 12:17:20 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:19.025 12:17:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:19.025 12:17:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:19.025 12:17:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.025 12:17:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.025 12:17:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.025 12:17:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.025 12:17:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.940 12:17:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:20.940 00:21:20.940 real 0m10.838s 00:21:20.940 user 0m3.818s 00:21:20.940 sys 0m5.420s 00:21:20.940 12:17:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:20.940 12:17:22 -- common/autotest_common.sh@10 -- # set +x 00:21:20.940 ************************************ 00:21:20.940 END TEST nvmf_async_init 00:21:20.940 ************************************ 00:21:20.940 12:17:22 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:20.940 12:17:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:20.940 12:17:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:20.940 12:17:22 -- common/autotest_common.sh@10 -- # set +x 00:21:21.237 ************************************ 00:21:21.237 START TEST dma 00:21:21.237 ************************************ 00:21:21.237 12:17:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:21.237 * Looking for test storage... 00:21:21.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:21.237 12:17:22 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.237 12:17:22 -- nvmf/common.sh@7 -- # uname -s 00:21:21.237 12:17:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.237 12:17:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.237 12:17:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.237 12:17:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.237 12:17:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.237 12:17:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.237 12:17:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.237 12:17:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.237 12:17:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.237 12:17:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.237 12:17:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:21.237 12:17:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:21.237 12:17:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.237 12:17:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.237 12:17:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.237 12:17:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.237 12:17:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.237 12:17:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.538 12:17:22 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.538 12:17:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.538 12:17:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.538 12:17:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.538 12:17:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.538 12:17:22 -- paths/export.sh@5 -- # export PATH 00:21:21.538 12:17:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.538 12:17:22 -- nvmf/common.sh@47 -- # : 0 00:21:21.538 12:17:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.538 12:17:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.538 12:17:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.538 12:17:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.538 12:17:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.538 12:17:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.538 12:17:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.538 12:17:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.538 12:17:22 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:21.538 12:17:22 -- host/dma.sh@13 -- # exit 0 00:21:21.538 00:21:21.538 real 0m0.139s 00:21:21.538 user 0m0.065s 00:21:21.538 sys 0m0.083s 00:21:21.538 12:17:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:21.538 12:17:22 -- common/autotest_common.sh@10 -- # set +x 00:21:21.538 ************************************ 00:21:21.538 END TEST dma 00:21:21.538 
************************************ 00:21:21.538 12:17:22 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:21.538 12:17:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:21.538 12:17:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:21.538 12:17:22 -- common/autotest_common.sh@10 -- # set +x 00:21:21.538 ************************************ 00:21:21.538 START TEST nvmf_identify 00:21:21.538 ************************************ 00:21:21.538 12:17:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:21.538 * Looking for test storage... 00:21:21.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:21.538 12:17:22 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.538 12:17:22 -- nvmf/common.sh@7 -- # uname -s 00:21:21.538 12:17:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.538 12:17:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.538 12:17:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.800 12:17:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.800 12:17:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.800 12:17:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.800 12:17:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.800 12:17:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.800 12:17:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.800 12:17:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.800 12:17:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:21.800 12:17:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:21.800 12:17:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.800 12:17:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.800 12:17:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.800 12:17:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.800 12:17:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.800 12:17:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.800 12:17:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.800 12:17:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.800 12:17:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.800 12:17:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.800 12:17:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.800 12:17:22 -- paths/export.sh@5 -- # export PATH 00:21:21.800 12:17:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.800 12:17:22 -- nvmf/common.sh@47 -- # : 0 00:21:21.800 12:17:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.800 12:17:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.800 12:17:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.800 12:17:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.800 12:17:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.800 12:17:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.800 12:17:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.800 12:17:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.800 12:17:22 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.800 12:17:22 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.800 12:17:22 -- host/identify.sh@14 -- # nvmftestinit 00:21:21.800 12:17:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:21.800 12:17:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.800 12:17:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:21.800 12:17:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:21.800 12:17:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:21.800 12:17:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.800 12:17:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.800 12:17:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.800 12:17:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:21.800 12:17:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:21.800 12:17:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.800 12:17:22 -- common/autotest_common.sh@10 -- # set +x 00:21:29.946 12:17:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:21:29.946 12:17:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:29.946 12:17:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:29.946 12:17:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:29.946 12:17:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:29.946 12:17:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:29.946 12:17:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:29.946 12:17:29 -- nvmf/common.sh@295 -- # net_devs=() 00:21:29.946 12:17:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:29.946 12:17:29 -- nvmf/common.sh@296 -- # e810=() 00:21:29.946 12:17:29 -- nvmf/common.sh@296 -- # local -ga e810 00:21:29.946 12:17:29 -- nvmf/common.sh@297 -- # x722=() 00:21:29.946 12:17:29 -- nvmf/common.sh@297 -- # local -ga x722 00:21:29.946 12:17:29 -- nvmf/common.sh@298 -- # mlx=() 00:21:29.946 12:17:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:29.946 12:17:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.946 12:17:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.946 12:17:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.946 12:17:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.946 12:17:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.946 12:17:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.946 12:17:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.946 12:17:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.946 12:17:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.946 12:17:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.946 12:17:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.946 12:17:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:29.947 12:17:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:29.947 12:17:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:29.947 12:17:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.947 12:17:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:29.947 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:29.947 12:17:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.947 12:17:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:29.947 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:29.947 12:17:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:21:29.947 12:17:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.947 12:17:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.947 12:17:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:29.947 12:17:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.947 12:17:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:29.947 Found net devices under 0000:31:00.0: cvl_0_0 00:21:29.947 12:17:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.947 12:17:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.947 12:17:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.947 12:17:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:29.947 12:17:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.947 12:17:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:29.947 Found net devices under 0000:31:00.1: cvl_0_1 00:21:29.947 12:17:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.947 12:17:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:29.947 12:17:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:29.947 12:17:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:29.947 12:17:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:29.947 12:17:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.947 12:17:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.947 12:17:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.947 12:17:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:29.947 12:17:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:29.947 12:17:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:29.947 12:17:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:29.947 12:17:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:29.947 12:17:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.947 12:17:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:29.947 12:17:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:29.947 12:17:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:29.947 12:17:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.947 12:17:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.947 12:17:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:29.947 12:17:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:29.947 12:17:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.947 12:17:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.947 12:17:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.947 12:17:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:29.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:29.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:21:29.947 00:21:29.947 --- 10.0.0.2 ping statistics --- 00:21:29.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.947 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:21:29.947 12:17:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:29.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:21:29.947 00:21:29.947 --- 10.0.0.1 ping statistics --- 00:21:29.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.947 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:21:29.947 12:17:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.947 12:17:30 -- nvmf/common.sh@411 -- # return 0 00:21:29.947 12:17:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:29.947 12:17:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.947 12:17:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:29.947 12:17:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:29.947 12:17:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.947 12:17:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:29.947 12:17:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:29.947 12:17:30 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:29.947 12:17:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:29.947 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:21:29.947 12:17:30 -- host/identify.sh@19 -- # nvmfpid=3482134 00:21:29.947 12:17:30 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:29.947 12:17:30 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:29.947 12:17:30 -- host/identify.sh@23 -- # waitforlisten 3482134 00:21:29.947 12:17:30 -- common/autotest_common.sh@817 -- # '[' -z 3482134 ']' 00:21:29.947 12:17:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.947 12:17:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:29.947 12:17:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.947 12:17:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:29.947 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:21:29.947 [2024-04-26 12:17:30.158712] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:21:29.947 [2024-04-26 12:17:30.158780] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.947 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.947 [2024-04-26 12:17:30.231184] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.947 [2024-04-26 12:17:30.305788] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.947 [2024-04-26 12:17:30.305831] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
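
The nvmf_tcp_init sequence traced above builds the test topology out of the two E810 ports: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in the firewall, and reachability is verified in both directions before nvmf_tgt is launched inside the namespace. A condensed reconstruction of those steps, using the interface and namespace names from this log (a sketch of the flow, not the exact common.sh code; the nvmf_tgt path is abbreviated):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
  modprobe nvme-tcp                                   # host-side NVMe/TCP driver
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
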
00:21:29.947 [2024-04-26 12:17:30.305846] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.947 [2024-04-26 12:17:30.305854] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.947 [2024-04-26 12:17:30.305861] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.947 [2024-04-26 12:17:30.305930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.947 [2024-04-26 12:17:30.306045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.947 [2024-04-26 12:17:30.306200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.947 [2024-04-26 12:17:30.306201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.947 12:17:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:29.947 12:17:30 -- common/autotest_common.sh@850 -- # return 0 00:21:29.947 12:17:30 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:29.947 12:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.947 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:21:29.947 [2024-04-26 12:17:30.943338] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.947 12:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.947 12:17:30 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:29.947 12:17:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:29.947 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:21:29.947 12:17:30 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:29.947 12:17:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.947 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:21:29.947 Malloc0 00:21:29.947 12:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.947 12:17:31 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:29.947 12:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.947 12:17:31 -- common/autotest_common.sh@10 -- # set +x 00:21:29.947 12:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.947 12:17:31 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:29.947 12:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.947 12:17:31 -- common/autotest_common.sh@10 -- # set +x 00:21:29.947 12:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.947 12:17:31 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:29.947 12:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.947 12:17:31 -- common/autotest_common.sh@10 -- # set +x 00:21:29.947 [2024-04-26 12:17:31.042851] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.947 12:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.947 12:17:31 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:29.947 12:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.947 12:17:31 -- common/autotest_common.sh@10 -- # set +x 00:21:29.947 12:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.947 12:17:31 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:21:29.947 12:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.947 12:17:31 -- common/autotest_common.sh@10 -- # set +x 00:21:29.948 [2024-04-26 12:17:31.066704] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:29.948 [ 00:21:29.948 { 00:21:29.948 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:29.948 "subtype": "Discovery", 00:21:29.948 "listen_addresses": [ 00:21:29.948 { 00:21:29.948 "transport": "TCP", 00:21:29.948 "trtype": "TCP", 00:21:29.948 "adrfam": "IPv4", 00:21:29.948 "traddr": "10.0.0.2", 00:21:29.948 "trsvcid": "4420" 00:21:29.948 } 00:21:29.948 ], 00:21:29.948 "allow_any_host": true, 00:21:29.948 "hosts": [] 00:21:29.948 }, 00:21:29.948 { 00:21:29.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.948 "subtype": "NVMe", 00:21:29.948 "listen_addresses": [ 00:21:29.948 { 00:21:29.948 "transport": "TCP", 00:21:29.948 "trtype": "TCP", 00:21:29.948 "adrfam": "IPv4", 00:21:29.948 "traddr": "10.0.0.2", 00:21:29.948 "trsvcid": "4420" 00:21:29.948 } 00:21:29.948 ], 00:21:29.948 "allow_any_host": true, 00:21:29.948 "hosts": [], 00:21:29.948 "serial_number": "SPDK00000000000001", 00:21:29.948 "model_number": "SPDK bdev Controller", 00:21:29.948 "max_namespaces": 32, 00:21:29.948 "min_cntlid": 1, 00:21:29.948 "max_cntlid": 65519, 00:21:29.948 "namespaces": [ 00:21:29.948 { 00:21:29.948 "nsid": 1, 00:21:29.948 "bdev_name": "Malloc0", 00:21:29.948 "name": "Malloc0", 00:21:29.948 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:29.948 "eui64": "ABCDEF0123456789", 00:21:29.948 "uuid": "1d062eba-6d8d-4d4e-9133-34c9141a3e6f" 00:21:29.948 } 00:21:29.948 ] 00:21:29.948 } 00:21:29.948 ] 00:21:29.948 12:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.948 12:17:31 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:29.948 [2024-04-26 12:17:31.103366] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
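
The target configuration in identify.sh is built through the rpc_cmd helper, and the nvmf_get_subsystems JSON above is the end result of that sequence. rpc_cmd forwards each call to SPDK's JSON-RPC interface, so roughly the same configuration can be reproduced with scripts/rpc.py against the /var/tmp/spdk.sock socket shown in the log. The sketch below mirrors the rpc_cmd calls traced above; treat it as an approximation of the test flow rather than a verbatim extract.

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192        # same transport flags as traced above
  $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
       --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                            # returns the JSON shown above
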
00:21:29.948 [2024-04-26 12:17:31.103412] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3482239 ] 00:21:29.948 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.948 [2024-04-26 12:17:31.137495] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:29.948 [2024-04-26 12:17:31.137538] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:29.948 [2024-04-26 12:17:31.137543] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:29.948 [2024-04-26 12:17:31.137556] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:29.948 [2024-04-26 12:17:31.137563] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:29.948 [2024-04-26 12:17:31.138017] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:29.948 [2024-04-26 12:17:31.138050] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16b1d10 0 00:21:29.948 [2024-04-26 12:17:31.148847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:29.948 [2024-04-26 12:17:31.148858] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:29.948 [2024-04-26 12:17:31.148863] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:29.948 [2024-04-26 12:17:31.148866] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:29.948 [2024-04-26 12:17:31.148900] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.148906] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.148910] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b1d10) 00:21:29.948 [2024-04-26 12:17:31.148922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:29.948 [2024-04-26 12:17:31.148937] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719a60, cid 0, qid 0 00:21:29.948 [2024-04-26 12:17:31.156850] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:29.948 [2024-04-26 12:17:31.156859] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:29.948 [2024-04-26 12:17:31.156862] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.156867] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719a60) on tqpair=0x16b1d10 00:21:29.948 [2024-04-26 12:17:31.156877] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:29.948 [2024-04-26 12:17:31.156883] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:29.948 [2024-04-26 12:17:31.156888] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:29.948 [2024-04-26 12:17:31.156900] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.156904] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.156908] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b1d10) 00:21:29.948 [2024-04-26 12:17:31.156915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.948 [2024-04-26 12:17:31.156927] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719a60, cid 0, qid 0 00:21:29.948 [2024-04-26 12:17:31.157151] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:29.948 [2024-04-26 12:17:31.157157] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:29.948 [2024-04-26 12:17:31.157161] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.157164] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719a60) on tqpair=0x16b1d10 00:21:29.948 [2024-04-26 12:17:31.157170] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:29.948 [2024-04-26 12:17:31.157177] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:29.948 [2024-04-26 12:17:31.157183] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.157187] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.157190] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b1d10) 00:21:29.948 [2024-04-26 12:17:31.157197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.948 [2024-04-26 12:17:31.157207] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719a60, cid 0, qid 0 00:21:29.948 [2024-04-26 12:17:31.157415] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:29.948 [2024-04-26 12:17:31.157421] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:29.948 [2024-04-26 12:17:31.157424] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.157431] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719a60) on tqpair=0x16b1d10 00:21:29.948 [2024-04-26 12:17:31.157436] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:29.948 [2024-04-26 12:17:31.157444] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:29.948 [2024-04-26 12:17:31.157451] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.157454] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.157458] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b1d10) 00:21:29.948 [2024-04-26 12:17:31.157465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.948 [2024-04-26 12:17:31.157474] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719a60, cid 0, qid 0 00:21:29.948 [2024-04-26 12:17:31.157650] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:29.948 [2024-04-26 
12:17:31.157656] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:29.948 [2024-04-26 12:17:31.157659] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.157663] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719a60) on tqpair=0x16b1d10 00:21:29.948 [2024-04-26 12:17:31.157669] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:29.948 [2024-04-26 12:17:31.157677] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.157681] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.157684] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b1d10) 00:21:29.948 [2024-04-26 12:17:31.157691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.948 [2024-04-26 12:17:31.157700] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719a60, cid 0, qid 0 00:21:29.948 [2024-04-26 12:17:31.157901] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:29.948 [2024-04-26 12:17:31.157908] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:29.948 [2024-04-26 12:17:31.157912] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.157916] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719a60) on tqpair=0x16b1d10 00:21:29.948 [2024-04-26 12:17:31.157921] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:29.948 [2024-04-26 12:17:31.157926] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:29.948 [2024-04-26 12:17:31.157933] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:29.948 [2024-04-26 12:17:31.158038] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:29.948 [2024-04-26 12:17:31.158043] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:29.948 [2024-04-26 12:17:31.158051] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.158055] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:29.948 [2024-04-26 12:17:31.158058] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b1d10) 00:21:29.948 [2024-04-26 12:17:31.158065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.948 [2024-04-26 12:17:31.158075] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719a60, cid 0, qid 0 00:21:29.949 [2024-04-26 12:17:31.158246] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:29.949 [2024-04-26 12:17:31.158252] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:29.949 [2024-04-26 12:17:31.158255] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.158259] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719a60) on tqpair=0x16b1d10 00:21:29.949 [2024-04-26 12:17:31.158264] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:29.949 [2024-04-26 12:17:31.158274] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.158277] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.158281] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b1d10) 00:21:29.949 [2024-04-26 12:17:31.158287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.949 [2024-04-26 12:17:31.158297] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719a60, cid 0, qid 0 00:21:29.949 [2024-04-26 12:17:31.158498] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:29.949 [2024-04-26 12:17:31.158504] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:29.949 [2024-04-26 12:17:31.158508] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.158511] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719a60) on tqpair=0x16b1d10 00:21:29.949 [2024-04-26 12:17:31.158517] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:29.949 [2024-04-26 12:17:31.158521] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:29.949 [2024-04-26 12:17:31.158528] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:29.949 [2024-04-26 12:17:31.158536] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:29.949 [2024-04-26 12:17:31.158547] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.158551] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b1d10) 00:21:29.949 [2024-04-26 12:17:31.158557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.949 [2024-04-26 12:17:31.158567] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719a60, cid 0, qid 0 00:21:29.949 [2024-04-26 12:17:31.158804] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:29.949 [2024-04-26 12:17:31.158811] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:29.949 [2024-04-26 12:17:31.158814] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.158818] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b1d10): datao=0, datal=4096, cccid=0 00:21:29.949 [2024-04-26 12:17:31.158823] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1719a60) on tqpair(0x16b1d10): expected_datao=0, payload_size=4096 00:21:29.949 [2024-04-26 12:17:31.158827] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.158835] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.158844] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.158942] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:29.949 [2024-04-26 12:17:31.158949] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:29.949 [2024-04-26 12:17:31.158952] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.158956] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719a60) on tqpair=0x16b1d10 00:21:29.949 [2024-04-26 12:17:31.158967] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:29.949 [2024-04-26 12:17:31.158972] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:29.949 [2024-04-26 12:17:31.158976] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:29.949 [2024-04-26 12:17:31.158981] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:29.949 [2024-04-26 12:17:31.158985] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:29.949 [2024-04-26 12:17:31.158990] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:29.949 [2024-04-26 12:17:31.158997] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:29.949 [2024-04-26 12:17:31.159004] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159008] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159011] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b1d10) 00:21:29.949 [2024-04-26 12:17:31.159018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:29.949 [2024-04-26 12:17:31.159028] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719a60, cid 0, qid 0 00:21:29.949 [2024-04-26 12:17:31.159258] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:29.949 [2024-04-26 12:17:31.159264] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:29.949 [2024-04-26 12:17:31.159268] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159271] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719a60) on tqpair=0x16b1d10 00:21:29.949 [2024-04-26 12:17:31.159280] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159283] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159287] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b1d10) 00:21:29.949 [2024-04-26 12:17:31.159293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:21:29.949 [2024-04-26 12:17:31.159299] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159302] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159306] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16b1d10) 00:21:29.949 [2024-04-26 12:17:31.159311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.949 [2024-04-26 12:17:31.159317] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159321] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159324] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16b1d10) 00:21:29.949 [2024-04-26 12:17:31.159330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.949 [2024-04-26 12:17:31.159336] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159339] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159343] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b1d10) 00:21:29.949 [2024-04-26 12:17:31.159348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.949 [2024-04-26 12:17:31.159353] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:29.949 [2024-04-26 12:17:31.159364] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:29.949 [2024-04-26 12:17:31.159371] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159375] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b1d10) 00:21:29.949 [2024-04-26 12:17:31.159381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.949 [2024-04-26 12:17:31.159392] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719a60, cid 0, qid 0 00:21:29.949 [2024-04-26 12:17:31.159397] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719bc0, cid 1, qid 0 00:21:29.949 [2024-04-26 12:17:31.159402] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719d20, cid 2, qid 0 00:21:29.949 [2024-04-26 12:17:31.159407] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719e80, cid 3, qid 0 00:21:29.949 [2024-04-26 12:17:31.159411] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719fe0, cid 4, qid 0 00:21:29.949 [2024-04-26 12:17:31.159681] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:29.949 [2024-04-26 12:17:31.159687] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:29.949 [2024-04-26 12:17:31.159691] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159694] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719fe0) on tqpair=0x16b1d10 
00:21:29.949 [2024-04-26 12:17:31.159700] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:29.949 [2024-04-26 12:17:31.159705] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:29.949 [2024-04-26 12:17:31.159715] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159719] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b1d10) 00:21:29.949 [2024-04-26 12:17:31.159725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.949 [2024-04-26 12:17:31.159734] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719fe0, cid 4, qid 0 00:21:29.949 [2024-04-26 12:17:31.159953] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:29.949 [2024-04-26 12:17:31.159960] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:29.949 [2024-04-26 12:17:31.159963] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159967] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b1d10): datao=0, datal=4096, cccid=4 00:21:29.949 [2024-04-26 12:17:31.159971] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1719fe0) on tqpair(0x16b1d10): expected_datao=0, payload_size=4096 00:21:29.949 [2024-04-26 12:17:31.159975] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159991] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:29.949 [2024-04-26 12:17:31.159995] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.203848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.215 [2024-04-26 12:17:31.203861] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.215 [2024-04-26 12:17:31.203864] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.203868] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719fe0) on tqpair=0x16b1d10 00:21:30.215 [2024-04-26 12:17:31.203881] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:30.215 [2024-04-26 12:17:31.203900] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.203908] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b1d10) 00:21:30.215 [2024-04-26 12:17:31.203915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.215 [2024-04-26 12:17:31.203922] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.203926] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.203929] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16b1d10) 00:21:30.215 [2024-04-26 12:17:31.203935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.215 [2024-04-26 12:17:31.203952] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719fe0, cid 4, qid 0 00:21:30.215 [2024-04-26 12:17:31.203957] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171a140, cid 5, qid 0 00:21:30.215 [2024-04-26 12:17:31.204180] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:30.215 [2024-04-26 12:17:31.204186] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:30.215 [2024-04-26 12:17:31.204189] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.204193] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b1d10): datao=0, datal=1024, cccid=4 00:21:30.215 [2024-04-26 12:17:31.204197] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1719fe0) on tqpair(0x16b1d10): expected_datao=0, payload_size=1024 00:21:30.215 [2024-04-26 12:17:31.204201] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.204208] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.204211] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.204217] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.215 [2024-04-26 12:17:31.204223] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.215 [2024-04-26 12:17:31.204226] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.204230] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171a140) on tqpair=0x16b1d10 00:21:30.215 [2024-04-26 12:17:31.244986] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.215 [2024-04-26 12:17:31.244997] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.215 [2024-04-26 12:17:31.245000] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.245004] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719fe0) on tqpair=0x16b1d10 00:21:30.215 [2024-04-26 12:17:31.245016] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.245020] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b1d10) 00:21:30.215 [2024-04-26 12:17:31.245027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.215 [2024-04-26 12:17:31.245041] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719fe0, cid 4, qid 0 00:21:30.215 [2024-04-26 12:17:31.245273] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:30.215 [2024-04-26 12:17:31.245279] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:30.215 [2024-04-26 12:17:31.245283] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.245286] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b1d10): datao=0, datal=3072, cccid=4 00:21:30.215 [2024-04-26 12:17:31.245291] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1719fe0) on tqpair(0x16b1d10): expected_datao=0, payload_size=3072 00:21:30.215 [2024-04-26 12:17:31.245295] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.245302] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
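
A detail worth decoding in the GET LOG PAGE traces here: cdw10 packs both the log page identifier (bits 7:0) and NUMDL, the zero-based lower dword count (bits 31:16), so the transfer sizes in the surrounding c2h_data lines follow directly from the command dwords:

  cdw10 0x00ff0070 -> LID 0x70 (discovery log), NUMDL 0x00ff -> 256 dwords = 1024 bytes (matches datal=1024)
  cdw10 0x02ff0070 -> LID 0x70, NUMDL 0x02ff -> 768 dwords = 3072 bytes (matches datal=3072)

The short generation-counter re-read a little further down uses the same encoding (cdw10 0x00010070 -> 2 dwords = 8 bytes).
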
00:21:30.215 [2024-04-26 12:17:31.245305] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.245474] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.215 [2024-04-26 12:17:31.245480] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.215 [2024-04-26 12:17:31.245483] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.245487] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719fe0) on tqpair=0x16b1d10 00:21:30.215 [2024-04-26 12:17:31.245497] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.245500] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b1d10) 00:21:30.215 [2024-04-26 12:17:31.245507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.215 [2024-04-26 12:17:31.245520] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719fe0, cid 4, qid 0 00:21:30.215 [2024-04-26 12:17:31.245771] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:30.215 [2024-04-26 12:17:31.245777] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:30.215 [2024-04-26 12:17:31.245781] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.245784] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b1d10): datao=0, datal=8, cccid=4 00:21:30.215 [2024-04-26 12:17:31.245788] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1719fe0) on tqpair(0x16b1d10): expected_datao=0, payload_size=8 00:21:30.215 [2024-04-26 12:17:31.245793] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.245799] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.245803] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.286025] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.215 [2024-04-26 12:17:31.286036] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.215 [2024-04-26 12:17:31.286039] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.215 [2024-04-26 12:17:31.286043] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719fe0) on tqpair=0x16b1d10 00:21:30.215 ===================================================== 00:21:30.215 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:30.215 ===================================================== 00:21:30.215 Controller Capabilities/Features 00:21:30.215 ================================ 00:21:30.215 Vendor ID: 0000 00:21:30.215 Subsystem Vendor ID: 0000 00:21:30.215 Serial Number: .................... 00:21:30.215 Model Number: ........................................ 
00:21:30.215 Firmware Version: 24.05 00:21:30.215 Recommended Arb Burst: 0 00:21:30.215 IEEE OUI Identifier: 00 00 00 00:21:30.215 Multi-path I/O 00:21:30.215 May have multiple subsystem ports: No 00:21:30.215 May have multiple controllers: No 00:21:30.215 Associated with SR-IOV VF: No 00:21:30.215 Max Data Transfer Size: 131072 00:21:30.215 Max Number of Namespaces: 0 00:21:30.215 Max Number of I/O Queues: 1024 00:21:30.215 NVMe Specification Version (VS): 1.3 00:21:30.215 NVMe Specification Version (Identify): 1.3 00:21:30.215 Maximum Queue Entries: 128 00:21:30.215 Contiguous Queues Required: Yes 00:21:30.215 Arbitration Mechanisms Supported 00:21:30.215 Weighted Round Robin: Not Supported 00:21:30.215 Vendor Specific: Not Supported 00:21:30.215 Reset Timeout: 15000 ms 00:21:30.215 Doorbell Stride: 4 bytes 00:21:30.215 NVM Subsystem Reset: Not Supported 00:21:30.215 Command Sets Supported 00:21:30.215 NVM Command Set: Supported 00:21:30.215 Boot Partition: Not Supported 00:21:30.215 Memory Page Size Minimum: 4096 bytes 00:21:30.215 Memory Page Size Maximum: 4096 bytes 00:21:30.215 Persistent Memory Region: Not Supported 00:21:30.215 Optional Asynchronous Events Supported 00:21:30.215 Namespace Attribute Notices: Not Supported 00:21:30.215 Firmware Activation Notices: Not Supported 00:21:30.215 ANA Change Notices: Not Supported 00:21:30.215 PLE Aggregate Log Change Notices: Not Supported 00:21:30.215 LBA Status Info Alert Notices: Not Supported 00:21:30.215 EGE Aggregate Log Change Notices: Not Supported 00:21:30.215 Normal NVM Subsystem Shutdown event: Not Supported 00:21:30.215 Zone Descriptor Change Notices: Not Supported 00:21:30.215 Discovery Log Change Notices: Supported 00:21:30.215 Controller Attributes 00:21:30.215 128-bit Host Identifier: Not Supported 00:21:30.215 Non-Operational Permissive Mode: Not Supported 00:21:30.215 NVM Sets: Not Supported 00:21:30.215 Read Recovery Levels: Not Supported 00:21:30.215 Endurance Groups: Not Supported 00:21:30.215 Predictable Latency Mode: Not Supported 00:21:30.215 Traffic Based Keep ALive: Not Supported 00:21:30.215 Namespace Granularity: Not Supported 00:21:30.215 SQ Associations: Not Supported 00:21:30.215 UUID List: Not Supported 00:21:30.215 Multi-Domain Subsystem: Not Supported 00:21:30.215 Fixed Capacity Management: Not Supported 00:21:30.215 Variable Capacity Management: Not Supported 00:21:30.215 Delete Endurance Group: Not Supported 00:21:30.215 Delete NVM Set: Not Supported 00:21:30.215 Extended LBA Formats Supported: Not Supported 00:21:30.215 Flexible Data Placement Supported: Not Supported 00:21:30.215 00:21:30.215 Controller Memory Buffer Support 00:21:30.215 ================================ 00:21:30.215 Supported: No 00:21:30.215 00:21:30.215 Persistent Memory Region Support 00:21:30.215 ================================ 00:21:30.215 Supported: No 00:21:30.215 00:21:30.215 Admin Command Set Attributes 00:21:30.215 ============================ 00:21:30.215 Security Send/Receive: Not Supported 00:21:30.215 Format NVM: Not Supported 00:21:30.215 Firmware Activate/Download: Not Supported 00:21:30.215 Namespace Management: Not Supported 00:21:30.215 Device Self-Test: Not Supported 00:21:30.215 Directives: Not Supported 00:21:30.215 NVMe-MI: Not Supported 00:21:30.215 Virtualization Management: Not Supported 00:21:30.215 Doorbell Buffer Config: Not Supported 00:21:30.215 Get LBA Status Capability: Not Supported 00:21:30.216 Command & Feature Lockdown Capability: Not Supported 00:21:30.216 Abort Command Limit: 1 00:21:30.216 Async 
Event Request Limit: 4 00:21:30.216 Number of Firmware Slots: N/A 00:21:30.216 Firmware Slot 1 Read-Only: N/A 00:21:30.216 Firmware Activation Without Reset: N/A 00:21:30.216 Multiple Update Detection Support: N/A 00:21:30.216 Firmware Update Granularity: No Information Provided 00:21:30.216 Per-Namespace SMART Log: No 00:21:30.216 Asymmetric Namespace Access Log Page: Not Supported 00:21:30.216 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:30.216 Command Effects Log Page: Not Supported 00:21:30.216 Get Log Page Extended Data: Supported 00:21:30.216 Telemetry Log Pages: Not Supported 00:21:30.216 Persistent Event Log Pages: Not Supported 00:21:30.216 Supported Log Pages Log Page: May Support 00:21:30.216 Commands Supported & Effects Log Page: Not Supported 00:21:30.216 Feature Identifiers & Effects Log Page:May Support 00:21:30.216 NVMe-MI Commands & Effects Log Page: May Support 00:21:30.216 Data Area 4 for Telemetry Log: Not Supported 00:21:30.216 Error Log Page Entries Supported: 128 00:21:30.216 Keep Alive: Not Supported 00:21:30.216 00:21:30.216 NVM Command Set Attributes 00:21:30.216 ========================== 00:21:30.216 Submission Queue Entry Size 00:21:30.216 Max: 1 00:21:30.216 Min: 1 00:21:30.216 Completion Queue Entry Size 00:21:30.216 Max: 1 00:21:30.216 Min: 1 00:21:30.216 Number of Namespaces: 0 00:21:30.216 Compare Command: Not Supported 00:21:30.216 Write Uncorrectable Command: Not Supported 00:21:30.216 Dataset Management Command: Not Supported 00:21:30.216 Write Zeroes Command: Not Supported 00:21:30.216 Set Features Save Field: Not Supported 00:21:30.216 Reservations: Not Supported 00:21:30.216 Timestamp: Not Supported 00:21:30.216 Copy: Not Supported 00:21:30.216 Volatile Write Cache: Not Present 00:21:30.216 Atomic Write Unit (Normal): 1 00:21:30.216 Atomic Write Unit (PFail): 1 00:21:30.216 Atomic Compare & Write Unit: 1 00:21:30.216 Fused Compare & Write: Supported 00:21:30.216 Scatter-Gather List 00:21:30.216 SGL Command Set: Supported 00:21:30.216 SGL Keyed: Supported 00:21:30.216 SGL Bit Bucket Descriptor: Not Supported 00:21:30.216 SGL Metadata Pointer: Not Supported 00:21:30.216 Oversized SGL: Not Supported 00:21:30.216 SGL Metadata Address: Not Supported 00:21:30.216 SGL Offset: Supported 00:21:30.216 Transport SGL Data Block: Not Supported 00:21:30.216 Replay Protected Memory Block: Not Supported 00:21:30.216 00:21:30.216 Firmware Slot Information 00:21:30.216 ========================= 00:21:30.216 Active slot: 0 00:21:30.216 00:21:30.216 00:21:30.216 Error Log 00:21:30.216 ========= 00:21:30.216 00:21:30.216 Active Namespaces 00:21:30.216 ================= 00:21:30.216 Discovery Log Page 00:21:30.216 ================== 00:21:30.216 Generation Counter: 2 00:21:30.216 Number of Records: 2 00:21:30.216 Record Format: 0 00:21:30.216 00:21:30.216 Discovery Log Entry 0 00:21:30.216 ---------------------- 00:21:30.216 Transport Type: 3 (TCP) 00:21:30.216 Address Family: 1 (IPv4) 00:21:30.216 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:30.216 Entry Flags: 00:21:30.216 Duplicate Returned Information: 1 00:21:30.216 Explicit Persistent Connection Support for Discovery: 1 00:21:30.216 Transport Requirements: 00:21:30.216 Secure Channel: Not Required 00:21:30.216 Port ID: 0 (0x0000) 00:21:30.216 Controller ID: 65535 (0xffff) 00:21:30.216 Admin Max SQ Size: 128 00:21:30.216 Transport Service Identifier: 4420 00:21:30.216 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:30.216 Transport Address: 10.0.0.2 00:21:30.216 
Discovery Log Entry 1 00:21:30.216 ---------------------- 00:21:30.216 Transport Type: 3 (TCP) 00:21:30.216 Address Family: 1 (IPv4) 00:21:30.216 Subsystem Type: 2 (NVM Subsystem) 00:21:30.216 Entry Flags: 00:21:30.216 Duplicate Returned Information: 0 00:21:30.216 Explicit Persistent Connection Support for Discovery: 0 00:21:30.216 Transport Requirements: 00:21:30.216 Secure Channel: Not Required 00:21:30.216 Port ID: 0 (0x0000) 00:21:30.216 Controller ID: 65535 (0xffff) 00:21:30.216 Admin Max SQ Size: 128 00:21:30.216 Transport Service Identifier: 4420 00:21:30.216 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:30.216 Transport Address: 10.0.0.2 [2024-04-26 12:17:31.286132] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:30.216 [2024-04-26 12:17:31.286145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.216 [2024-04-26 12:17:31.286152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.216 [2024-04-26 12:17:31.286158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.216 [2024-04-26 12:17:31.286164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.216 [2024-04-26 12:17:31.286172] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.286176] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.286179] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b1d10) 00:21:30.216 [2024-04-26 12:17:31.286187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.216 [2024-04-26 12:17:31.286201] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719e80, cid 3, qid 0 00:21:30.216 [2024-04-26 12:17:31.286320] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.216 [2024-04-26 12:17:31.286327] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.216 [2024-04-26 12:17:31.286330] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.286334] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719e80) on tqpair=0x16b1d10 00:21:30.216 [2024-04-26 12:17:31.286342] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.286347] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.286351] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b1d10) 00:21:30.216 [2024-04-26 12:17:31.286357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.216 [2024-04-26 12:17:31.286370] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719e80, cid 3, qid 0 00:21:30.216 [2024-04-26 12:17:31.286538] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.216 [2024-04-26 12:17:31.286545] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.216 [2024-04-26 12:17:31.286548] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.286552] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719e80) on tqpair=0x16b1d10 00:21:30.216 [2024-04-26 12:17:31.286558] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:30.216 [2024-04-26 12:17:31.286562] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:30.216 [2024-04-26 12:17:31.286572] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.286576] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.286579] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b1d10) 00:21:30.216 [2024-04-26 12:17:31.286586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.216 [2024-04-26 12:17:31.286595] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719e80, cid 3, qid 0 00:21:30.216 [2024-04-26 12:17:31.286821] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.216 [2024-04-26 12:17:31.286828] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.216 [2024-04-26 12:17:31.286831] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.286835] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719e80) on tqpair=0x16b1d10 00:21:30.216 [2024-04-26 12:17:31.286851] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.286855] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.286859] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b1d10) 00:21:30.216 [2024-04-26 12:17:31.286865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.216 [2024-04-26 12:17:31.286875] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719e80, cid 3, qid 0 00:21:30.216 [2024-04-26 12:17:31.287125] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.216 [2024-04-26 12:17:31.287131] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.216 [2024-04-26 12:17:31.287135] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.287139] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719e80) on tqpair=0x16b1d10 00:21:30.216 [2024-04-26 12:17:31.287148] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.287152] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.287156] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b1d10) 00:21:30.216 [2024-04-26 12:17:31.287163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.216 [2024-04-26 12:17:31.287172] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719e80, cid 3, qid 0 00:21:30.216 [2024-04-26 12:17:31.287427] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.216 [2024-04-26 
12:17:31.287434] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.216 [2024-04-26 12:17:31.287437] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.287443] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719e80) on tqpair=0x16b1d10 00:21:30.216 [2024-04-26 12:17:31.287453] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.287457] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.287460] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b1d10) 00:21:30.216 [2024-04-26 12:17:31.287467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.216 [2024-04-26 12:17:31.287476] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719e80, cid 3, qid 0 00:21:30.216 [2024-04-26 12:17:31.287684] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.216 [2024-04-26 12:17:31.287690] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.216 [2024-04-26 12:17:31.287693] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.287697] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719e80) on tqpair=0x16b1d10 00:21:30.216 [2024-04-26 12:17:31.287707] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.287711] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.287715] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b1d10) 00:21:30.216 [2024-04-26 12:17:31.287721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.216 [2024-04-26 12:17:31.287731] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719e80, cid 3, qid 0 00:21:30.216 [2024-04-26 12:17:31.291845] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.216 [2024-04-26 12:17:31.291853] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.216 [2024-04-26 12:17:31.291857] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.291860] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719e80) on tqpair=0x16b1d10 00:21:30.216 [2024-04-26 12:17:31.291871] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.291875] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.291878] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b1d10) 00:21:30.216 [2024-04-26 12:17:31.291885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.216 [2024-04-26 12:17:31.291895] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1719e80, cid 3, qid 0 00:21:30.216 [2024-04-26 12:17:31.292082] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.216 [2024-04-26 12:17:31.292088] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.216 [2024-04-26 12:17:31.292092] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:21:30.216 [2024-04-26 12:17:31.292095] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1719e80) on tqpair=0x16b1d10 00:21:30.216 [2024-04-26 12:17:31.292103] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:21:30.216 00:21:30.216 12:17:31 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:30.216 [2024-04-26 12:17:31.328642] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:21:30.216 [2024-04-26 12:17:31.328680] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3482260 ] 00:21:30.216 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.216 [2024-04-26 12:17:31.361381] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:30.216 [2024-04-26 12:17:31.361428] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:30.216 [2024-04-26 12:17:31.361433] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:30.216 [2024-04-26 12:17:31.361445] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:30.216 [2024-04-26 12:17:31.361453] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:30.216 [2024-04-26 12:17:31.364864] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:30.216 [2024-04-26 12:17:31.364890] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x129dd10 0 00:21:30.216 [2024-04-26 12:17:31.372847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:30.216 [2024-04-26 12:17:31.372856] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:30.216 [2024-04-26 12:17:31.372860] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:30.216 [2024-04-26 12:17:31.372863] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:30.216 [2024-04-26 12:17:31.372892] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.372897] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.372901] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129dd10) 00:21:30.216 [2024-04-26 12:17:31.372912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:30.216 [2024-04-26 12:17:31.372926] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305a60, cid 0, qid 0 00:21:30.216 [2024-04-26 12:17:31.380849] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.216 [2024-04-26 12:17:31.380858] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.216 [2024-04-26 12:17:31.380862] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.380866] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305a60) on tqpair=0x129dd10 
00:21:30.216 [2024-04-26 12:17:31.380875] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:30.216 [2024-04-26 12:17:31.380880] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:30.216 [2024-04-26 12:17:31.380885] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:30.216 [2024-04-26 12:17:31.380896] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.380900] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.380904] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129dd10) 00:21:30.216 [2024-04-26 12:17:31.380911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.216 [2024-04-26 12:17:31.380923] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305a60, cid 0, qid 0 00:21:30.216 [2024-04-26 12:17:31.381128] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.216 [2024-04-26 12:17:31.381134] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.216 [2024-04-26 12:17:31.381138] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.381142] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305a60) on tqpair=0x129dd10 00:21:30.216 [2024-04-26 12:17:31.381147] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:30.216 [2024-04-26 12:17:31.381154] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:30.216 [2024-04-26 12:17:31.381163] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.381167] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.216 [2024-04-26 12:17:31.381171] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129dd10) 00:21:30.216 [2024-04-26 12:17:31.381177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.216 [2024-04-26 12:17:31.381187] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305a60, cid 0, qid 0 00:21:30.216 [2024-04-26 12:17:31.381384] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.216 [2024-04-26 12:17:31.381391] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.217 [2024-04-26 12:17:31.381394] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.381398] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305a60) on tqpair=0x129dd10 00:21:30.217 [2024-04-26 12:17:31.381403] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:30.217 [2024-04-26 12:17:31.381411] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:30.217 [2024-04-26 12:17:31.381418] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.381422] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.381425] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129dd10) 00:21:30.217 [2024-04-26 12:17:31.381432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.217 [2024-04-26 12:17:31.381441] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305a60, cid 0, qid 0 00:21:30.217 [2024-04-26 12:17:31.381647] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.217 [2024-04-26 12:17:31.381654] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.217 [2024-04-26 12:17:31.381657] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.381661] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305a60) on tqpair=0x129dd10 00:21:30.217 [2024-04-26 12:17:31.381666] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:30.217 [2024-04-26 12:17:31.381675] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.381678] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.381682] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129dd10) 00:21:30.217 [2024-04-26 12:17:31.381689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.217 [2024-04-26 12:17:31.381698] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305a60, cid 0, qid 0 00:21:30.217 [2024-04-26 12:17:31.381877] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.217 [2024-04-26 12:17:31.381884] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.217 [2024-04-26 12:17:31.381887] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.381891] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305a60) on tqpair=0x129dd10 00:21:30.217 [2024-04-26 12:17:31.381896] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:30.217 [2024-04-26 12:17:31.381901] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:30.217 [2024-04-26 12:17:31.381908] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:30.217 [2024-04-26 12:17:31.382013] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:30.217 [2024-04-26 12:17:31.382020] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:30.217 [2024-04-26 12:17:31.382027] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.382031] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.382034] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129dd10) 00:21:30.217 [2024-04-26 12:17:31.382041] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.217 [2024-04-26 12:17:31.382051] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305a60, cid 0, qid 0 00:21:30.217 [2024-04-26 12:17:31.382227] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.217 [2024-04-26 12:17:31.382233] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.217 [2024-04-26 12:17:31.382236] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.382240] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305a60) on tqpair=0x129dd10 00:21:30.217 [2024-04-26 12:17:31.382245] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:30.217 [2024-04-26 12:17:31.382254] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.382258] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.382262] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129dd10) 00:21:30.217 [2024-04-26 12:17:31.382268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.217 [2024-04-26 12:17:31.382278] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305a60, cid 0, qid 0 00:21:30.217 [2024-04-26 12:17:31.382472] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.217 [2024-04-26 12:17:31.382478] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.217 [2024-04-26 12:17:31.382481] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.382485] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305a60) on tqpair=0x129dd10 00:21:30.217 [2024-04-26 12:17:31.382490] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:30.217 [2024-04-26 12:17:31.382494] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:30.217 [2024-04-26 12:17:31.382501] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:30.217 [2024-04-26 12:17:31.382513] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:30.217 [2024-04-26 12:17:31.382522] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.382526] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129dd10) 00:21:30.217 [2024-04-26 12:17:31.382533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.217 [2024-04-26 12:17:31.382543] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305a60, cid 0, qid 0 00:21:30.217 [2024-04-26 12:17:31.382885] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:30.217 [2024-04-26 12:17:31.382892] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:30.217 
[2024-04-26 12:17:31.382895] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.382899] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129dd10): datao=0, datal=4096, cccid=0 00:21:30.217 [2024-04-26 12:17:31.382903] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1305a60) on tqpair(0x129dd10): expected_datao=0, payload_size=4096 00:21:30.217 [2024-04-26 12:17:31.382910] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.382917] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.382920] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383062] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.217 [2024-04-26 12:17:31.383068] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.217 [2024-04-26 12:17:31.383072] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383075] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305a60) on tqpair=0x129dd10 00:21:30.217 [2024-04-26 12:17:31.383083] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:30.217 [2024-04-26 12:17:31.383087] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:30.217 [2024-04-26 12:17:31.383092] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:30.217 [2024-04-26 12:17:31.383096] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:30.217 [2024-04-26 12:17:31.383100] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:30.217 [2024-04-26 12:17:31.383105] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:30.217 [2024-04-26 12:17:31.383112] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:30.217 [2024-04-26 12:17:31.383118] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383122] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383126] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129dd10) 00:21:30.217 [2024-04-26 12:17:31.383132] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:30.217 [2024-04-26 12:17:31.383143] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305a60, cid 0, qid 0 00:21:30.217 [2024-04-26 12:17:31.383356] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.217 [2024-04-26 12:17:31.383362] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.217 [2024-04-26 12:17:31.383365] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383369] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305a60) on tqpair=0x129dd10 00:21:30.217 [2024-04-26 12:17:31.383376] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:21:30.217 [2024-04-26 12:17:31.383380] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383383] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129dd10) 00:21:30.217 [2024-04-26 12:17:31.383389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.217 [2024-04-26 12:17:31.383395] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383399] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383402] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x129dd10) 00:21:30.217 [2024-04-26 12:17:31.383408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.217 [2024-04-26 12:17:31.383414] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383418] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383421] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x129dd10) 00:21:30.217 [2024-04-26 12:17:31.383428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.217 [2024-04-26 12:17:31.383435] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383438] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383442] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129dd10) 00:21:30.217 [2024-04-26 12:17:31.383448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.217 [2024-04-26 12:17:31.383452] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:30.217 [2024-04-26 12:17:31.383461] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:30.217 [2024-04-26 12:17:31.383468] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383471] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129dd10) 00:21:30.217 [2024-04-26 12:17:31.383478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.217 [2024-04-26 12:17:31.383489] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305a60, cid 0, qid 0 00:21:30.217 [2024-04-26 12:17:31.383494] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305bc0, cid 1, qid 0 00:21:30.217 [2024-04-26 12:17:31.383499] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305d20, cid 2, qid 0 00:21:30.217 [2024-04-26 12:17:31.383504] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305e80, cid 3, qid 0 00:21:30.217 [2024-04-26 12:17:31.383508] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305fe0, cid 4, qid 0 00:21:30.217 [2024-04-26 12:17:31.383721] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:21:30.217 [2024-04-26 12:17:31.383727] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.217 [2024-04-26 12:17:31.383731] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383734] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305fe0) on tqpair=0x129dd10 00:21:30.217 [2024-04-26 12:17:31.383740] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:30.217 [2024-04-26 12:17:31.383745] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:30.217 [2024-04-26 12:17:31.383754] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:30.217 [2024-04-26 12:17:31.383760] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:30.217 [2024-04-26 12:17:31.383766] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383769] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383773] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129dd10) 00:21:30.217 [2024-04-26 12:17:31.383779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:30.217 [2024-04-26 12:17:31.383789] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305fe0, cid 4, qid 0 00:21:30.217 [2024-04-26 12:17:31.383945] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.217 [2024-04-26 12:17:31.383952] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.217 [2024-04-26 12:17:31.383955] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.383959] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305fe0) on tqpair=0x129dd10 00:21:30.217 [2024-04-26 12:17:31.384009] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:30.217 [2024-04-26 12:17:31.384019] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:30.217 [2024-04-26 12:17:31.384027] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.384030] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129dd10) 00:21:30.217 [2024-04-26 12:17:31.384037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.217 [2024-04-26 12:17:31.384047] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305fe0, cid 4, qid 0 00:21:30.217 [2024-04-26 12:17:31.384255] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:30.217 [2024-04-26 12:17:31.384262] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:30.217 [2024-04-26 12:17:31.384265] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.384269] 
nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129dd10): datao=0, datal=4096, cccid=4 00:21:30.217 [2024-04-26 12:17:31.384273] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1305fe0) on tqpair(0x129dd10): expected_datao=0, payload_size=4096 00:21:30.217 [2024-04-26 12:17:31.384277] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.384303] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.384307] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.426846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.217 [2024-04-26 12:17:31.426857] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.217 [2024-04-26 12:17:31.426861] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.426865] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305fe0) on tqpair=0x129dd10 00:21:30.217 [2024-04-26 12:17:31.426875] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:30.217 [2024-04-26 12:17:31.426885] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:30.217 [2024-04-26 12:17:31.426894] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:30.217 [2024-04-26 12:17:31.426901] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.426905] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129dd10) 00:21:30.217 [2024-04-26 12:17:31.426912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.217 [2024-04-26 12:17:31.426924] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305fe0, cid 4, qid 0 00:21:30.217 [2024-04-26 12:17:31.427100] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:30.217 [2024-04-26 12:17:31.427106] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:30.217 [2024-04-26 12:17:31.427110] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.427114] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129dd10): datao=0, datal=4096, cccid=4 00:21:30.217 [2024-04-26 12:17:31.427118] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1305fe0) on tqpair(0x129dd10): expected_datao=0, payload_size=4096 00:21:30.217 [2024-04-26 12:17:31.427122] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.427139] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:30.217 [2024-04-26 12:17:31.427143] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:30.480 [2024-04-26 12:17:31.468053] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.480 [2024-04-26 12:17:31.468063] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.480 [2024-04-26 12:17:31.468069] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.480 [2024-04-26 12:17:31.468073] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x1305fe0) on tqpair=0x129dd10 00:21:30.480 [2024-04-26 12:17:31.468087] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:30.480 [2024-04-26 12:17:31.468095] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:30.480 [2024-04-26 12:17:31.468103] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.480 [2024-04-26 12:17:31.468107] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129dd10) 00:21:30.480 [2024-04-26 12:17:31.468114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.480 [2024-04-26 12:17:31.468125] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305fe0, cid 4, qid 0 00:21:30.480 [2024-04-26 12:17:31.468317] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:30.480 [2024-04-26 12:17:31.468323] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:30.480 [2024-04-26 12:17:31.468327] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:30.480 [2024-04-26 12:17:31.468330] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129dd10): datao=0, datal=4096, cccid=4 00:21:30.480 [2024-04-26 12:17:31.468334] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1305fe0) on tqpair(0x129dd10): expected_datao=0, payload_size=4096 00:21:30.480 [2024-04-26 12:17:31.468339] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.480 [2024-04-26 12:17:31.468345] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:30.480 [2024-04-26 12:17:31.468349] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:30.480 [2024-04-26 12:17:31.509013] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.480 [2024-04-26 12:17:31.509022] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.480 [2024-04-26 12:17:31.509026] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.480 [2024-04-26 12:17:31.509029] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305fe0) on tqpair=0x129dd10 00:21:30.480 [2024-04-26 12:17:31.509038] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:30.480 [2024-04-26 12:17:31.509045] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:30.480 [2024-04-26 12:17:31.509056] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:30.480 [2024-04-26 12:17:31.509062] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:30.480 [2024-04-26 12:17:31.509067] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:30.480 [2024-04-26 12:17:31.509071] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:30.480 [2024-04-26 
12:17:31.509076] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:30.480 [2024-04-26 12:17:31.509081] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:30.480 [2024-04-26 12:17:31.509094] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.480 [2024-04-26 12:17:31.509097] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129dd10) 00:21:30.480 [2024-04-26 12:17:31.509104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.480 [2024-04-26 12:17:31.509113] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.480 [2024-04-26 12:17:31.509117] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.480 [2024-04-26 12:17:31.509120] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x129dd10) 00:21:30.480 [2024-04-26 12:17:31.509126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.480 [2024-04-26 12:17:31.509139] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305fe0, cid 4, qid 0 00:21:30.480 [2024-04-26 12:17:31.509144] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1306140, cid 5, qid 0 00:21:30.480 [2024-04-26 12:17:31.509320] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.480 [2024-04-26 12:17:31.509327] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.480 [2024-04-26 12:17:31.509330] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.480 [2024-04-26 12:17:31.509334] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305fe0) on tqpair=0x129dd10 00:21:30.480 [2024-04-26 12:17:31.509341] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.480 [2024-04-26 12:17:31.509347] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.480 [2024-04-26 12:17:31.509350] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.480 [2024-04-26 12:17:31.509354] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1306140) on tqpair=0x129dd10 00:21:30.480 [2024-04-26 12:17:31.509363] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.480 [2024-04-26 12:17:31.509367] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x129dd10) 00:21:30.481 [2024-04-26 12:17:31.509373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.481 [2024-04-26 12:17:31.509382] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1306140, cid 5, qid 0 00:21:30.481 [2024-04-26 12:17:31.509577] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.481 [2024-04-26 12:17:31.509583] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.481 [2024-04-26 12:17:31.509586] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.509590] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1306140) on tqpair=0x129dd10 00:21:30.481 [2024-04-26 12:17:31.509599] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.509603] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x129dd10) 00:21:30.481 [2024-04-26 12:17:31.509609] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.481 [2024-04-26 12:17:31.509618] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1306140, cid 5, qid 0 00:21:30.481 [2024-04-26 12:17:31.509844] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.481 [2024-04-26 12:17:31.509851] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.481 [2024-04-26 12:17:31.509854] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.509858] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1306140) on tqpair=0x129dd10 00:21:30.481 [2024-04-26 12:17:31.509867] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.509871] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x129dd10) 00:21:30.481 [2024-04-26 12:17:31.509877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.481 [2024-04-26 12:17:31.509887] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1306140, cid 5, qid 0 00:21:30.481 [2024-04-26 12:17:31.510048] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.481 [2024-04-26 12:17:31.510054] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.481 [2024-04-26 12:17:31.510060] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510064] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1306140) on tqpair=0x129dd10 00:21:30.481 [2024-04-26 12:17:31.510075] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510079] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x129dd10) 00:21:30.481 [2024-04-26 12:17:31.510085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.481 [2024-04-26 12:17:31.510092] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510096] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129dd10) 00:21:30.481 [2024-04-26 12:17:31.510102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.481 [2024-04-26 12:17:31.510109] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510113] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x129dd10) 00:21:30.481 [2024-04-26 12:17:31.510119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.481 [2024-04-26 12:17:31.510126] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510129] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x129dd10) 00:21:30.481 [2024-04-26 12:17:31.510135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.481 [2024-04-26 12:17:31.510146] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1306140, cid 5, qid 0 00:21:30.481 [2024-04-26 12:17:31.510151] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305fe0, cid 4, qid 0 00:21:30.481 [2024-04-26 12:17:31.510156] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13062a0, cid 6, qid 0 00:21:30.481 [2024-04-26 12:17:31.510161] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1306400, cid 7, qid 0 00:21:30.481 [2024-04-26 12:17:31.510401] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:30.481 [2024-04-26 12:17:31.510407] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:30.481 [2024-04-26 12:17:31.510411] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510414] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129dd10): datao=0, datal=8192, cccid=5 00:21:30.481 [2024-04-26 12:17:31.510419] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1306140) on tqpair(0x129dd10): expected_datao=0, payload_size=8192 00:21:30.481 [2024-04-26 12:17:31.510423] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510510] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510515] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510520] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:30.481 [2024-04-26 12:17:31.510526] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:30.481 [2024-04-26 12:17:31.510529] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510533] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129dd10): datao=0, datal=512, cccid=4 00:21:30.481 [2024-04-26 12:17:31.510537] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1305fe0) on tqpair(0x129dd10): expected_datao=0, payload_size=512 00:21:30.481 [2024-04-26 12:17:31.510541] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510547] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510552] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510558] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:30.481 [2024-04-26 12:17:31.510564] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:30.481 [2024-04-26 12:17:31.510567] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510571] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129dd10): datao=0, datal=512, cccid=6 00:21:30.481 [2024-04-26 12:17:31.510575] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13062a0) on tqpair(0x129dd10): expected_datao=0, payload_size=512 00:21:30.481 [2024-04-26 12:17:31.510579] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.481 
[2024-04-26 12:17:31.510585] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510589] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510594] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:30.481 [2024-04-26 12:17:31.510600] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:30.481 [2024-04-26 12:17:31.510603] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510607] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129dd10): datao=0, datal=4096, cccid=7 00:21:30.481 [2024-04-26 12:17:31.510611] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1306400) on tqpair(0x129dd10): expected_datao=0, payload_size=4096 00:21:30.481 [2024-04-26 12:17:31.510615] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510622] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510625] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510637] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.481 [2024-04-26 12:17:31.510643] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.481 [2024-04-26 12:17:31.510647] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510650] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1306140) on tqpair=0x129dd10 00:21:30.481 [2024-04-26 12:17:31.510664] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.481 [2024-04-26 12:17:31.510669] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.481 [2024-04-26 12:17:31.510673] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510676] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305fe0) on tqpair=0x129dd10 00:21:30.481 [2024-04-26 12:17:31.510686] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.481 [2024-04-26 12:17:31.510691] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.481 [2024-04-26 12:17:31.510695] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510698] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13062a0) on tqpair=0x129dd10 00:21:30.481 [2024-04-26 12:17:31.510706] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.481 [2024-04-26 12:17:31.510712] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.481 [2024-04-26 12:17:31.510715] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.481 [2024-04-26 12:17:31.510719] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1306400) on tqpair=0x129dd10 00:21:30.481 ===================================================== 00:21:30.481 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:30.481 ===================================================== 00:21:30.481 Controller Capabilities/Features 00:21:30.481 ================================ 00:21:30.481 Vendor ID: 8086 00:21:30.481 Subsystem Vendor ID: 8086 00:21:30.481 Serial Number: SPDK00000000000001 00:21:30.481 Model Number: SPDK bdev Controller 00:21:30.481 Firmware Version: 24.05 00:21:30.481 
Recommended Arb Burst: 6 00:21:30.481 IEEE OUI Identifier: e4 d2 5c 00:21:30.481 Multi-path I/O 00:21:30.481 May have multiple subsystem ports: Yes 00:21:30.481 May have multiple controllers: Yes 00:21:30.481 Associated with SR-IOV VF: No 00:21:30.481 Max Data Transfer Size: 131072 00:21:30.481 Max Number of Namespaces: 32 00:21:30.481 Max Number of I/O Queues: 127 00:21:30.481 NVMe Specification Version (VS): 1.3 00:21:30.481 NVMe Specification Version (Identify): 1.3 00:21:30.481 Maximum Queue Entries: 128 00:21:30.481 Contiguous Queues Required: Yes 00:21:30.481 Arbitration Mechanisms Supported 00:21:30.481 Weighted Round Robin: Not Supported 00:21:30.481 Vendor Specific: Not Supported 00:21:30.481 Reset Timeout: 15000 ms 00:21:30.481 Doorbell Stride: 4 bytes 00:21:30.481 NVM Subsystem Reset: Not Supported 00:21:30.481 Command Sets Supported 00:21:30.481 NVM Command Set: Supported 00:21:30.481 Boot Partition: Not Supported 00:21:30.481 Memory Page Size Minimum: 4096 bytes 00:21:30.481 Memory Page Size Maximum: 4096 bytes 00:21:30.481 Persistent Memory Region: Not Supported 00:21:30.481 Optional Asynchronous Events Supported 00:21:30.481 Namespace Attribute Notices: Supported 00:21:30.482 Firmware Activation Notices: Not Supported 00:21:30.482 ANA Change Notices: Not Supported 00:21:30.482 PLE Aggregate Log Change Notices: Not Supported 00:21:30.482 LBA Status Info Alert Notices: Not Supported 00:21:30.482 EGE Aggregate Log Change Notices: Not Supported 00:21:30.482 Normal NVM Subsystem Shutdown event: Not Supported 00:21:30.482 Zone Descriptor Change Notices: Not Supported 00:21:30.482 Discovery Log Change Notices: Not Supported 00:21:30.482 Controller Attributes 00:21:30.482 128-bit Host Identifier: Supported 00:21:30.482 Non-Operational Permissive Mode: Not Supported 00:21:30.482 NVM Sets: Not Supported 00:21:30.482 Read Recovery Levels: Not Supported 00:21:30.482 Endurance Groups: Not Supported 00:21:30.482 Predictable Latency Mode: Not Supported 00:21:30.482 Traffic Based Keep ALive: Not Supported 00:21:30.482 Namespace Granularity: Not Supported 00:21:30.482 SQ Associations: Not Supported 00:21:30.482 UUID List: Not Supported 00:21:30.482 Multi-Domain Subsystem: Not Supported 00:21:30.482 Fixed Capacity Management: Not Supported 00:21:30.482 Variable Capacity Management: Not Supported 00:21:30.482 Delete Endurance Group: Not Supported 00:21:30.482 Delete NVM Set: Not Supported 00:21:30.482 Extended LBA Formats Supported: Not Supported 00:21:30.482 Flexible Data Placement Supported: Not Supported 00:21:30.482 00:21:30.482 Controller Memory Buffer Support 00:21:30.482 ================================ 00:21:30.482 Supported: No 00:21:30.482 00:21:30.482 Persistent Memory Region Support 00:21:30.482 ================================ 00:21:30.482 Supported: No 00:21:30.482 00:21:30.482 Admin Command Set Attributes 00:21:30.482 ============================ 00:21:30.482 Security Send/Receive: Not Supported 00:21:30.482 Format NVM: Not Supported 00:21:30.482 Firmware Activate/Download: Not Supported 00:21:30.482 Namespace Management: Not Supported 00:21:30.482 Device Self-Test: Not Supported 00:21:30.482 Directives: Not Supported 00:21:30.482 NVMe-MI: Not Supported 00:21:30.482 Virtualization Management: Not Supported 00:21:30.482 Doorbell Buffer Config: Not Supported 00:21:30.482 Get LBA Status Capability: Not Supported 00:21:30.482 Command & Feature Lockdown Capability: Not Supported 00:21:30.482 Abort Command Limit: 4 00:21:30.482 Async Event Request Limit: 4 00:21:30.482 Number of 
Firmware Slots: N/A 00:21:30.482 Firmware Slot 1 Read-Only: N/A 00:21:30.482 Firmware Activation Without Reset: N/A 00:21:30.482 Multiple Update Detection Support: N/A 00:21:30.482 Firmware Update Granularity: No Information Provided 00:21:30.482 Per-Namespace SMART Log: No 00:21:30.482 Asymmetric Namespace Access Log Page: Not Supported 00:21:30.482 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:30.482 Command Effects Log Page: Supported 00:21:30.482 Get Log Page Extended Data: Supported 00:21:30.482 Telemetry Log Pages: Not Supported 00:21:30.482 Persistent Event Log Pages: Not Supported 00:21:30.482 Supported Log Pages Log Page: May Support 00:21:30.482 Commands Supported & Effects Log Page: Not Supported 00:21:30.482 Feature Identifiers & Effects Log Page:May Support 00:21:30.482 NVMe-MI Commands & Effects Log Page: May Support 00:21:30.482 Data Area 4 for Telemetry Log: Not Supported 00:21:30.482 Error Log Page Entries Supported: 128 00:21:30.482 Keep Alive: Supported 00:21:30.482 Keep Alive Granularity: 10000 ms 00:21:30.482 00:21:30.482 NVM Command Set Attributes 00:21:30.482 ========================== 00:21:30.482 Submission Queue Entry Size 00:21:30.482 Max: 64 00:21:30.482 Min: 64 00:21:30.482 Completion Queue Entry Size 00:21:30.482 Max: 16 00:21:30.482 Min: 16 00:21:30.482 Number of Namespaces: 32 00:21:30.482 Compare Command: Supported 00:21:30.482 Write Uncorrectable Command: Not Supported 00:21:30.482 Dataset Management Command: Supported 00:21:30.482 Write Zeroes Command: Supported 00:21:30.482 Set Features Save Field: Not Supported 00:21:30.482 Reservations: Supported 00:21:30.482 Timestamp: Not Supported 00:21:30.482 Copy: Supported 00:21:30.482 Volatile Write Cache: Present 00:21:30.482 Atomic Write Unit (Normal): 1 00:21:30.482 Atomic Write Unit (PFail): 1 00:21:30.482 Atomic Compare & Write Unit: 1 00:21:30.482 Fused Compare & Write: Supported 00:21:30.482 Scatter-Gather List 00:21:30.482 SGL Command Set: Supported 00:21:30.482 SGL Keyed: Supported 00:21:30.482 SGL Bit Bucket Descriptor: Not Supported 00:21:30.482 SGL Metadata Pointer: Not Supported 00:21:30.482 Oversized SGL: Not Supported 00:21:30.482 SGL Metadata Address: Not Supported 00:21:30.482 SGL Offset: Supported 00:21:30.482 Transport SGL Data Block: Not Supported 00:21:30.482 Replay Protected Memory Block: Not Supported 00:21:30.482 00:21:30.482 Firmware Slot Information 00:21:30.482 ========================= 00:21:30.482 Active slot: 1 00:21:30.482 Slot 1 Firmware Revision: 24.05 00:21:30.482 00:21:30.482 00:21:30.482 Commands Supported and Effects 00:21:30.482 ============================== 00:21:30.482 Admin Commands 00:21:30.482 -------------- 00:21:30.482 Get Log Page (02h): Supported 00:21:30.482 Identify (06h): Supported 00:21:30.482 Abort (08h): Supported 00:21:30.482 Set Features (09h): Supported 00:21:30.482 Get Features (0Ah): Supported 00:21:30.482 Asynchronous Event Request (0Ch): Supported 00:21:30.482 Keep Alive (18h): Supported 00:21:30.482 I/O Commands 00:21:30.482 ------------ 00:21:30.482 Flush (00h): Supported LBA-Change 00:21:30.482 Write (01h): Supported LBA-Change 00:21:30.482 Read (02h): Supported 00:21:30.482 Compare (05h): Supported 00:21:30.482 Write Zeroes (08h): Supported LBA-Change 00:21:30.482 Dataset Management (09h): Supported LBA-Change 00:21:30.482 Copy (19h): Supported LBA-Change 00:21:30.482 Unknown (79h): Supported LBA-Change 00:21:30.482 Unknown (7Ah): Supported 00:21:30.482 00:21:30.482 Error Log 00:21:30.482 ========= 00:21:30.482 00:21:30.482 Arbitration 
00:21:30.482 =========== 00:21:30.482 Arbitration Burst: 1 00:21:30.482 00:21:30.482 Power Management 00:21:30.482 ================ 00:21:30.482 Number of Power States: 1 00:21:30.482 Current Power State: Power State #0 00:21:30.482 Power State #0: 00:21:30.482 Max Power: 0.00 W 00:21:30.482 Non-Operational State: Operational 00:21:30.482 Entry Latency: Not Reported 00:21:30.482 Exit Latency: Not Reported 00:21:30.482 Relative Read Throughput: 0 00:21:30.482 Relative Read Latency: 0 00:21:30.482 Relative Write Throughput: 0 00:21:30.482 Relative Write Latency: 0 00:21:30.482 Idle Power: Not Reported 00:21:30.482 Active Power: Not Reported 00:21:30.482 Non-Operational Permissive Mode: Not Supported 00:21:30.482 00:21:30.482 Health Information 00:21:30.482 ================== 00:21:30.482 Critical Warnings: 00:21:30.482 Available Spare Space: OK 00:21:30.482 Temperature: OK 00:21:30.482 Device Reliability: OK 00:21:30.482 Read Only: No 00:21:30.482 Volatile Memory Backup: OK 00:21:30.482 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:30.482 Temperature Threshold: [2024-04-26 12:17:31.510821] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.482 [2024-04-26 12:17:31.510827] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x129dd10) 00:21:30.482 [2024-04-26 12:17:31.510833] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.482 [2024-04-26 12:17:31.510851] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1306400, cid 7, qid 0 00:21:30.482 [2024-04-26 12:17:31.511062] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.482 [2024-04-26 12:17:31.511068] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.482 [2024-04-26 12:17:31.511073] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.482 [2024-04-26 12:17:31.511077] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1306400) on tqpair=0x129dd10 00:21:30.482 [2024-04-26 12:17:31.511105] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:30.482 [2024-04-26 12:17:31.511116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.482 [2024-04-26 12:17:31.511123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.482 [2024-04-26 12:17:31.511129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.482 [2024-04-26 12:17:31.511135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.482 [2024-04-26 12:17:31.511142] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.482 [2024-04-26 12:17:31.511146] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.482 [2024-04-26 12:17:31.511149] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129dd10) 00:21:30.482 [2024-04-26 12:17:31.511156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.482 [2024-04-26 12:17:31.511168] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1305e80, cid 3, qid 0 00:21:30.482 [2024-04-26 12:17:31.511362] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.483 [2024-04-26 12:17:31.511368] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.483 [2024-04-26 12:17:31.511371] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.483 [2024-04-26 12:17:31.511375] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305e80) on tqpair=0x129dd10 00:21:30.483 [2024-04-26 12:17:31.511382] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.483 [2024-04-26 12:17:31.511386] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.483 [2024-04-26 12:17:31.511389] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129dd10) 00:21:30.483 [2024-04-26 12:17:31.511396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.483 [2024-04-26 12:17:31.511408] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305e80, cid 3, qid 0 00:21:30.483 [2024-04-26 12:17:31.511583] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.483 [2024-04-26 12:17:31.511589] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.483 [2024-04-26 12:17:31.511593] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.483 [2024-04-26 12:17:31.511596] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305e80) on tqpair=0x129dd10 00:21:30.483 [2024-04-26 12:17:31.511602] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:30.483 [2024-04-26 12:17:31.511606] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:30.483 [2024-04-26 12:17:31.511615] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.483 [2024-04-26 12:17:31.511619] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.483 [2024-04-26 12:17:31.511622] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129dd10) 00:21:30.483 [2024-04-26 12:17:31.511629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.483 [2024-04-26 12:17:31.511638] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305e80, cid 3, qid 0 00:21:30.483 [2024-04-26 12:17:31.511793] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.483 [2024-04-26 12:17:31.511799] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.483 [2024-04-26 12:17:31.511806] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.483 [2024-04-26 12:17:31.511809] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305e80) on tqpair=0x129dd10 00:21:30.483 [2024-04-26 12:17:31.511819] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:30.483 [2024-04-26 12:17:31.511823] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:30.483 [2024-04-26 12:17:31.511827] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129dd10) 00:21:30.483 [2024-04-26 12:17:31.511833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:30.483 [2024-04-26 12:17:31.515852] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1305e80, cid 3, qid 0 00:21:30.483 [2024-04-26 12:17:31.516039] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:30.483 [2024-04-26 12:17:31.516046] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:30.483 [2024-04-26 12:17:31.516049] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:30.483 [2024-04-26 12:17:31.516053] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1305e80) on tqpair=0x129dd10 00:21:30.483 [2024-04-26 12:17:31.516060] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:21:30.483 0 Kelvin (-273 Celsius) 00:21:30.483 Available Spare: 0% 00:21:30.483 Available Spare Threshold: 0% 00:21:30.483 Life Percentage Used: 0% 00:21:30.483 Data Units Read: 0 00:21:30.483 Data Units Written: 0 00:21:30.483 Host Read Commands: 0 00:21:30.483 Host Write Commands: 0 00:21:30.483 Controller Busy Time: 0 minutes 00:21:30.483 Power Cycles: 0 00:21:30.483 Power On Hours: 0 hours 00:21:30.483 Unsafe Shutdowns: 0 00:21:30.483 Unrecoverable Media Errors: 0 00:21:30.483 Lifetime Error Log Entries: 0 00:21:30.483 Warning Temperature Time: 0 minutes 00:21:30.483 Critical Temperature Time: 0 minutes 00:21:30.483 00:21:30.483 Number of Queues 00:21:30.483 ================ 00:21:30.483 Number of I/O Submission Queues: 127 00:21:30.483 Number of I/O Completion Queues: 127 00:21:30.483 00:21:30.483 Active Namespaces 00:21:30.483 ================= 00:21:30.483 Namespace ID:1 00:21:30.483 Error Recovery Timeout: Unlimited 00:21:30.483 Command Set Identifier: NVM (00h) 00:21:30.483 Deallocate: Supported 00:21:30.483 Deallocated/Unwritten Error: Not Supported 00:21:30.483 Deallocated Read Value: Unknown 00:21:30.483 Deallocate in Write Zeroes: Not Supported 00:21:30.483 Deallocated Guard Field: 0xFFFF 00:21:30.483 Flush: Supported 00:21:30.483 Reservation: Supported 00:21:30.483 Namespace Sharing Capabilities: Multiple Controllers 00:21:30.483 Size (in LBAs): 131072 (0GiB) 00:21:30.483 Capacity (in LBAs): 131072 (0GiB) 00:21:30.483 Utilization (in LBAs): 131072 (0GiB) 00:21:30.483 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:30.483 EUI64: ABCDEF0123456789 00:21:30.483 UUID: 1d062eba-6d8d-4d4e-9133-34c9141a3e6f 00:21:30.483 Thin Provisioning: Not Supported 00:21:30.483 Per-NS Atomic Units: Yes 00:21:30.483 Atomic Boundary Size (Normal): 0 00:21:30.483 Atomic Boundary Size (PFail): 0 00:21:30.483 Atomic Boundary Offset: 0 00:21:30.483 Maximum Single Source Range Length: 65535 00:21:30.483 Maximum Copy Length: 65535 00:21:30.483 Maximum Source Range Count: 1 00:21:30.483 NGUID/EUI64 Never Reused: No 00:21:30.483 Namespace Write Protected: No 00:21:30.483 Number of LBA Formats: 1 00:21:30.483 Current LBA Format: LBA Format #00 00:21:30.483 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:30.483 00:21:30.483 12:17:31 -- host/identify.sh@51 -- # sync 00:21:30.483 12:17:31 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:30.483 12:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.483 12:17:31 -- common/autotest_common.sh@10 -- # set +x 00:21:30.483 12:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.483 12:17:31 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:30.483 12:17:31 -- host/identify.sh@56 -- # nvmftestfini 00:21:30.483 
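Note: the Identify Controller / Identify Namespace dump above (power state 0, all health counters at zero, namespace 1 with 131072 LBAs of 512 bytes) is what host/identify.sh read over the NVMe/TCP connection to nqn.2016-06.io.spdk:cnode1 before tearing the subsystem down; the interleaved debug trace is the orderly shutdown (RTD3E = 0, so the default 10000 ms timeout applies, and the controller reported shutdown complete after 4 ms). A minimal way to reproduce the same identify data by hand from the initiator side, assuming nvme-cli is installed and the target were still up on the 10.0.0.2:4420 listener these tests use (the /dev/nvme0 and /dev/nvme0n1 names below are illustrative and depend on enumeration order), would be:

  # load the host-side transport and connect to the test subsystem
  modprobe nvme-tcp
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # dump the same Identify Controller / Identify Namespace structures
  nvme id-ctrl /dev/nvme0
  nvme id-ns /dev/nvme0n1
  # disconnect when done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1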
12:17:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:30.483 12:17:31 -- nvmf/common.sh@117 -- # sync 00:21:30.483 12:17:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:30.483 12:17:31 -- nvmf/common.sh@120 -- # set +e 00:21:30.483 12:17:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:30.483 12:17:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:30.483 rmmod nvme_tcp 00:21:30.483 rmmod nvme_fabrics 00:21:30.483 rmmod nvme_keyring 00:21:30.483 12:17:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:30.483 12:17:31 -- nvmf/common.sh@124 -- # set -e 00:21:30.483 12:17:31 -- nvmf/common.sh@125 -- # return 0 00:21:30.483 12:17:31 -- nvmf/common.sh@478 -- # '[' -n 3482134 ']' 00:21:30.483 12:17:31 -- nvmf/common.sh@479 -- # killprocess 3482134 00:21:30.483 12:17:31 -- common/autotest_common.sh@936 -- # '[' -z 3482134 ']' 00:21:30.483 12:17:31 -- common/autotest_common.sh@940 -- # kill -0 3482134 00:21:30.483 12:17:31 -- common/autotest_common.sh@941 -- # uname 00:21:30.483 12:17:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:30.483 12:17:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3482134 00:21:30.483 12:17:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:30.483 12:17:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:30.483 12:17:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3482134' 00:21:30.483 killing process with pid 3482134 00:21:30.483 12:17:31 -- common/autotest_common.sh@955 -- # kill 3482134 00:21:30.483 [2024-04-26 12:17:31.665826] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:30.483 12:17:31 -- common/autotest_common.sh@960 -- # wait 3482134 00:21:30.744 12:17:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:30.744 12:17:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:30.744 12:17:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:30.744 12:17:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:30.744 12:17:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:30.744 12:17:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.744 12:17:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.744 12:17:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.287 12:17:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:33.287 00:21:33.287 real 0m11.241s 00:21:33.287 user 0m8.097s 00:21:33.287 sys 0m5.870s 00:21:33.287 12:17:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:33.288 12:17:33 -- common/autotest_common.sh@10 -- # set +x 00:21:33.288 ************************************ 00:21:33.288 END TEST nvmf_identify 00:21:33.288 ************************************ 00:21:33.288 12:17:33 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:33.288 12:17:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:33.288 12:17:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:33.288 12:17:33 -- common/autotest_common.sh@10 -- # set +x 00:21:33.288 ************************************ 00:21:33.288 START TEST nvmf_perf 00:21:33.288 ************************************ 00:21:33.288 12:17:34 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:33.288 * Looking for test storage... 00:21:33.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:33.288 12:17:34 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.288 12:17:34 -- nvmf/common.sh@7 -- # uname -s 00:21:33.288 12:17:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.288 12:17:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.288 12:17:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.288 12:17:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.288 12:17:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.288 12:17:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.288 12:17:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.288 12:17:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.288 12:17:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.288 12:17:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.288 12:17:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:33.288 12:17:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:33.288 12:17:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.288 12:17:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.288 12:17:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.288 12:17:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.288 12:17:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.288 12:17:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.288 12:17:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.288 12:17:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.288 12:17:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.288 12:17:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.288 12:17:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.288 12:17:34 -- paths/export.sh@5 -- # export PATH 00:21:33.288 12:17:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.288 12:17:34 -- nvmf/common.sh@47 -- # : 0 00:21:33.288 12:17:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:33.288 12:17:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:33.288 12:17:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.288 12:17:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.288 12:17:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.288 12:17:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:33.288 12:17:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:33.288 12:17:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:33.288 12:17:34 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:33.288 12:17:34 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:33.288 12:17:34 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:33.288 12:17:34 -- host/perf.sh@17 -- # nvmftestinit 00:21:33.288 12:17:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:33.288 12:17:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.288 12:17:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:33.288 12:17:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:33.288 12:17:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:33.288 12:17:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.288 12:17:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.288 12:17:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.288 12:17:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:33.288 12:17:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:33.288 12:17:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:33.288 12:17:34 -- common/autotest_common.sh@10 -- # set +x 00:21:39.917 12:17:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:39.917 12:17:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:39.917 12:17:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:39.917 12:17:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:39.917 12:17:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:39.917 12:17:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:39.917 12:17:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:39.917 12:17:40 -- nvmf/common.sh@295 -- # net_devs=() 
00:21:39.917 12:17:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:39.917 12:17:40 -- nvmf/common.sh@296 -- # e810=() 00:21:39.917 12:17:40 -- nvmf/common.sh@296 -- # local -ga e810 00:21:39.917 12:17:40 -- nvmf/common.sh@297 -- # x722=() 00:21:39.917 12:17:40 -- nvmf/common.sh@297 -- # local -ga x722 00:21:39.917 12:17:40 -- nvmf/common.sh@298 -- # mlx=() 00:21:39.917 12:17:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:39.917 12:17:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.917 12:17:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.917 12:17:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.917 12:17:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.917 12:17:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.917 12:17:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.917 12:17:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.917 12:17:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.917 12:17:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.917 12:17:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.917 12:17:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.917 12:17:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:39.917 12:17:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:39.917 12:17:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:39.917 12:17:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.917 12:17:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:39.917 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:39.917 12:17:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.917 12:17:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:39.917 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:39.917 12:17:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:39.917 12:17:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.917 12:17:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.917 12:17:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:39.917 12:17:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:21:39.917 12:17:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:39.917 Found net devices under 0000:31:00.0: cvl_0_0 00:21:39.917 12:17:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.917 12:17:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.917 12:17:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.917 12:17:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:39.917 12:17:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.917 12:17:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:39.917 Found net devices under 0000:31:00.1: cvl_0_1 00:21:39.917 12:17:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.917 12:17:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:39.917 12:17:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:39.917 12:17:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:39.917 12:17:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:39.917 12:17:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.917 12:17:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.917 12:17:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.917 12:17:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:39.917 12:17:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.917 12:17:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.917 12:17:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:39.917 12:17:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.917 12:17:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.917 12:17:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:39.917 12:17:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:39.918 12:17:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.918 12:17:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.918 12:17:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.918 12:17:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.918 12:17:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:39.918 12:17:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.179 12:17:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.180 12:17:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.180 12:17:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:40.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:21:40.180 00:21:40.180 --- 10.0.0.2 ping statistics --- 00:21:40.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.180 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:21:40.180 12:17:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:21:40.180 00:21:40.180 --- 10.0.0.1 ping statistics --- 00:21:40.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.180 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:40.180 12:17:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.180 12:17:41 -- nvmf/common.sh@411 -- # return 0 00:21:40.180 12:17:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:40.180 12:17:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.180 12:17:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:40.180 12:17:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:40.180 12:17:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.180 12:17:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:40.180 12:17:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:40.180 12:17:41 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:40.180 12:17:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:40.180 12:17:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:40.180 12:17:41 -- common/autotest_common.sh@10 -- # set +x 00:21:40.180 12:17:41 -- nvmf/common.sh@470 -- # nvmfpid=3486632 00:21:40.180 12:17:41 -- nvmf/common.sh@471 -- # waitforlisten 3486632 00:21:40.180 12:17:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:40.180 12:17:41 -- common/autotest_common.sh@817 -- # '[' -z 3486632 ']' 00:21:40.180 12:17:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.180 12:17:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:40.180 12:17:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.180 12:17:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:40.180 12:17:41 -- common/autotest_common.sh@10 -- # set +x 00:21:40.180 [2024-04-26 12:17:41.347754] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:21:40.180 [2024-04-26 12:17:41.347848] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.180 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.441 [2024-04-26 12:17:41.421736] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.441 [2024-04-26 12:17:41.495494] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.441 [2024-04-26 12:17:41.495549] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.441 [2024-04-26 12:17:41.495558] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.441 [2024-04-26 12:17:41.495565] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.441 [2024-04-26 12:17:41.495571] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
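Note: the nvmf_tcp_init block above is what makes a single dual-port E810 host act as separate target and initiator: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24 (target side), cvl_0_1 stays in the root namespace with 10.0.0.1/24 (initiator side), TCP port 4420 is opened, and both directions are ping-checked before the target starts. Condensed out of the xtrace output, the sequence this run executed was:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target (0.578 ms here)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator (0.188 ms here)

The nvmf_tgt launched just below is prefixed with ip netns exec cvl_0_0_ns_spdk so its listeners bind inside the target namespace, while rpc.py keeps talking to it from the root namespace over the /var/tmp/spdk.sock UNIX socket, which is not affected by network namespaces.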
00:21:40.441 [2024-04-26 12:17:41.495708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.441 [2024-04-26 12:17:41.495847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.441 [2024-04-26 12:17:41.495957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.441 [2024-04-26 12:17:41.496092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.014 12:17:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:41.014 12:17:42 -- common/autotest_common.sh@850 -- # return 0 00:21:41.014 12:17:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:41.014 12:17:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:41.014 12:17:42 -- common/autotest_common.sh@10 -- # set +x 00:21:41.014 12:17:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.014 12:17:42 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:41.014 12:17:42 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:41.588 12:17:42 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:41.588 12:17:42 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:41.588 12:17:42 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:21:41.588 12:17:42 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:41.850 12:17:42 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:41.850 12:17:42 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:21:41.850 12:17:42 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:41.850 12:17:42 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:41.850 12:17:42 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:42.110 [2024-04-26 12:17:43.116016] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.110 12:17:43 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:42.110 12:17:43 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:42.110 12:17:43 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:42.370 12:17:43 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:42.370 12:17:43 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:42.630 12:17:43 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.630 [2024-04-26 12:17:43.790488] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.631 12:17:43 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:42.891 12:17:43 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:21:42.892 12:17:43 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:21:42.892 12:17:43 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
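Note: the rpc.py sequence above is the entire target-side configuration for the perf test: one TCP transport, one subsystem exposing a 64 MiB Malloc bdev plus the local NVMe drive at 0000:65:00.0 (attached as Nvme0n1 via gen_nvme.sh / load_subsystem_config), and listeners for both the subsystem and the discovery service. Stripped of the xtrace noise, the calls made against the running nvmf_tgt were roughly:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                             # -> Malloc0
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf runs that follow then target either the local drive directly (-r 'trtype:PCIe traddr:0000:65:00.0') or the exported subsystem over the fabric (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'); since namespaces are numbered in the order they were added, the two namespaces in each fabric result are presumably NSID 1 = Malloc0 and NSID 2 = Nvme0n1, which matches NSID 1 showing the lower latency throughout.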
00:21:42.892 12:17:43 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:21:44.279 Initializing NVMe Controllers 00:21:44.279 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:21:44.279 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:21:44.279 Initialization complete. Launching workers. 00:21:44.279 ======================================================== 00:21:44.279 Latency(us) 00:21:44.279 Device Information : IOPS MiB/s Average min max 00:21:44.279 PCIE (0000:65:00.0) NSID 1 from core 0: 80328.74 313.78 397.72 62.10 4522.74 00:21:44.279 ======================================================== 00:21:44.279 Total : 80328.74 313.78 397.72 62.10 4522.74 00:21:44.279 00:21:44.279 12:17:45 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:44.279 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.665 Initializing NVMe Controllers 00:21:45.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:45.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:45.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:45.665 Initialization complete. Launching workers. 00:21:45.665 ======================================================== 00:21:45.665 Latency(us) 00:21:45.665 Device Information : IOPS MiB/s Average min max 00:21:45.665 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.00 0.32 12488.38 92.35 45844.53 00:21:45.665 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 63.00 0.25 16199.40 7961.92 47890.44 00:21:45.665 ======================================================== 00:21:45.665 Total : 146.00 0.57 14089.71 92.35 47890.44 00:21:45.665 00:21:45.665 12:17:46 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:45.665 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.053 Initializing NVMe Controllers 00:21:47.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:47.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:47.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:47.053 Initialization complete. Launching workers. 
00:21:47.053 ======================================================== 00:21:47.053 Latency(us) 00:21:47.053 Device Information : IOPS MiB/s Average min max 00:21:47.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10227.98 39.95 3132.74 463.83 6553.21 00:21:47.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3633.99 14.20 8845.59 4546.23 17046.39 00:21:47.053 ======================================================== 00:21:47.053 Total : 13861.97 54.15 4630.40 463.83 17046.39 00:21:47.053 00:21:47.053 12:17:47 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:47.053 12:17:47 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:47.053 12:17:47 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:47.053 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.601 Initializing NVMe Controllers 00:21:49.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:49.601 Controller IO queue size 128, less than required. 00:21:49.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:49.601 Controller IO queue size 128, less than required. 00:21:49.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:49.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:49.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:49.601 Initialization complete. Launching workers. 00:21:49.601 ======================================================== 00:21:49.601 Latency(us) 00:21:49.601 Device Information : IOPS MiB/s Average min max 00:21:49.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1564.00 391.00 83968.44 47651.17 151965.82 00:21:49.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 559.50 139.87 234118.76 62679.98 337066.25 00:21:49.601 ======================================================== 00:21:49.601 Total : 2123.50 530.87 123530.06 47651.17 337066.25 00:21:49.601 00:21:49.601 12:17:50 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:49.601 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.601 No valid NVMe controllers or AIO or URING devices found 00:21:49.601 Initializing NVMe Controllers 00:21:49.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:49.601 Controller IO queue size 128, less than required. 00:21:49.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:49.601 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:49.601 Controller IO queue size 128, less than required. 00:21:49.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:49.601 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:21:49.601 WARNING: Some requested NVMe devices were skipped 00:21:49.601 12:17:50 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:49.601 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.149 Initializing NVMe Controllers 00:21:52.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:52.149 Controller IO queue size 128, less than required. 00:21:52.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:52.149 Controller IO queue size 128, less than required. 00:21:52.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:52.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:52.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:52.149 Initialization complete. Launching workers. 00:21:52.149 00:21:52.149 ==================== 00:21:52.149 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:52.149 TCP transport: 00:21:52.149 polls: 24596 00:21:52.149 idle_polls: 13231 00:21:52.149 sock_completions: 11365 00:21:52.149 nvme_completions: 6541 00:21:52.149 submitted_requests: 9878 00:21:52.149 queued_requests: 1 00:21:52.149 00:21:52.149 ==================== 00:21:52.149 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:52.149 TCP transport: 00:21:52.149 polls: 24315 00:21:52.149 idle_polls: 12045 00:21:52.149 sock_completions: 12270 00:21:52.149 nvme_completions: 6521 00:21:52.149 submitted_requests: 9724 00:21:52.149 queued_requests: 1 00:21:52.149 ======================================================== 00:21:52.149 Latency(us) 00:21:52.149 Device Information : IOPS MiB/s Average min max 00:21:52.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1632.19 408.05 80305.83 44724.84 123758.60 00:21:52.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1627.20 406.80 79515.13 37385.46 125129.03 00:21:52.149 ======================================================== 00:21:52.149 Total : 3259.39 814.85 79911.09 37385.46 125129.03 00:21:52.149 00:21:52.149 12:17:53 -- host/perf.sh@66 -- # sync 00:21:52.149 12:17:53 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:52.410 12:17:53 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:52.410 12:17:53 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:52.410 12:17:53 -- host/perf.sh@114 -- # nvmftestfini 00:21:52.410 12:17:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:52.410 12:17:53 -- nvmf/common.sh@117 -- # sync 00:21:52.410 12:17:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:52.410 12:17:53 -- nvmf/common.sh@120 -- # set +e 00:21:52.410 12:17:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:52.410 12:17:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:52.410 rmmod nvme_tcp 00:21:52.410 rmmod nvme_fabrics 00:21:52.410 rmmod nvme_keyring 00:21:52.410 12:17:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:52.410 12:17:53 -- nvmf/common.sh@124 -- # set -e 00:21:52.410 12:17:53 -- nvmf/common.sh@125 -- # return 0 00:21:52.410 12:17:53 -- 
nvmf/common.sh@478 -- # '[' -n 3486632 ']' 00:21:52.410 12:17:53 -- nvmf/common.sh@479 -- # killprocess 3486632 00:21:52.410 12:17:53 -- common/autotest_common.sh@936 -- # '[' -z 3486632 ']' 00:21:52.410 12:17:53 -- common/autotest_common.sh@940 -- # kill -0 3486632 00:21:52.410 12:17:53 -- common/autotest_common.sh@941 -- # uname 00:21:52.410 12:17:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:52.410 12:17:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3486632 00:21:52.410 12:17:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:52.410 12:17:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:52.410 12:17:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3486632' 00:21:52.410 killing process with pid 3486632 00:21:52.410 12:17:53 -- common/autotest_common.sh@955 -- # kill 3486632 00:21:52.410 12:17:53 -- common/autotest_common.sh@960 -- # wait 3486632 00:21:54.957 12:17:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:54.957 12:17:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:54.957 12:17:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:54.957 12:17:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.957 12:17:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:54.957 12:17:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.957 12:17:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.957 12:17:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.940 12:17:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:56.940 00:21:56.940 real 0m23.553s 00:21:56.940 user 0m57.585s 00:21:56.940 sys 0m7.874s 00:21:56.940 12:17:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:56.940 12:17:57 -- common/autotest_common.sh@10 -- # set +x 00:21:56.940 ************************************ 00:21:56.940 END TEST nvmf_perf 00:21:56.940 ************************************ 00:21:56.940 12:17:57 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:56.940 12:17:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:56.940 12:17:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:56.940 12:17:57 -- common/autotest_common.sh@10 -- # set +x 00:21:56.940 ************************************ 00:21:56.940 START TEST nvmf_fio_host 00:21:56.940 ************************************ 00:21:56.940 12:17:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:56.941 * Looking for test storage... 
00:21:56.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:56.941 12:17:57 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.941 12:17:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.941 12:17:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.941 12:17:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.941 12:17:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.941 12:17:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.941 12:17:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.941 12:17:57 -- paths/export.sh@5 -- # export PATH 00:21:56.941 12:17:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.941 12:17:57 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.941 12:17:57 -- nvmf/common.sh@7 -- # uname -s 00:21:56.941 12:17:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.941 12:17:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.941 12:17:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.941 12:17:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.941 12:17:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.941 12:17:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.941 12:17:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.941 12:17:57 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.941 12:17:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.941 12:17:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.941 12:17:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:56.941 12:17:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:56.941 12:17:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.941 12:17:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.941 12:17:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.941 12:17:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.941 12:17:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.941 12:17:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.941 12:17:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.941 12:17:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.941 12:17:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.941 12:17:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.941 12:17:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.941 12:17:57 -- paths/export.sh@5 -- # export PATH 00:21:56.941 12:17:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.941 12:17:57 -- nvmf/common.sh@47 -- # : 0 00:21:56.941 12:17:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:56.941 12:17:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:56.941 12:17:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.941 12:17:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.941 12:17:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.941 12:17:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:56.941 12:17:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:56.941 12:17:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:56.941 12:17:57 -- host/fio.sh@12 -- # nvmftestinit 00:21:56.941 12:17:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:56.941 12:17:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.941 12:17:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:56.941 12:17:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:56.941 12:17:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:56.941 12:17:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.941 12:17:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.941 12:17:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.941 12:17:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:56.941 12:17:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:56.941 12:17:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:56.941 12:17:58 -- common/autotest_common.sh@10 -- # set +x 00:22:05.093 12:18:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:05.093 12:18:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:05.093 12:18:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:05.093 12:18:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:05.093 12:18:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:05.093 12:18:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:05.093 12:18:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:05.093 12:18:04 -- nvmf/common.sh@295 -- # net_devs=() 00:22:05.093 12:18:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:05.093 12:18:04 -- nvmf/common.sh@296 -- # e810=() 00:22:05.093 12:18:04 -- nvmf/common.sh@296 -- # local -ga e810 00:22:05.093 12:18:04 -- nvmf/common.sh@297 -- # x722=() 00:22:05.093 12:18:04 -- nvmf/common.sh@297 -- # local -ga x722 00:22:05.093 12:18:04 -- nvmf/common.sh@298 -- # mlx=() 00:22:05.093 12:18:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:05.093 12:18:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.093 12:18:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.093 12:18:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.093 12:18:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.093 12:18:04 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.093 12:18:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.093 12:18:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.093 12:18:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.093 12:18:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.093 12:18:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.093 12:18:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.093 12:18:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:05.093 12:18:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:05.093 12:18:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:05.093 12:18:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.093 12:18:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:05.093 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:05.093 12:18:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.093 12:18:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:05.093 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:05.093 12:18:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:05.093 12:18:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.093 12:18:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.093 12:18:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:05.093 12:18:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.093 12:18:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:05.093 Found net devices under 0000:31:00.0: cvl_0_0 00:22:05.093 12:18:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.093 12:18:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.093 12:18:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.093 12:18:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:05.093 12:18:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.093 12:18:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:05.093 Found net devices under 0000:31:00.1: cvl_0_1 00:22:05.093 12:18:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.093 12:18:04 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:05.093 12:18:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:05.093 12:18:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:05.093 12:18:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:05.093 12:18:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.093 12:18:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.093 12:18:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.093 12:18:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:05.093 12:18:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.093 12:18:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.093 12:18:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:05.093 12:18:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.093 12:18:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.093 12:18:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:05.093 12:18:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:05.093 12:18:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.093 12:18:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.093 12:18:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.093 12:18:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.093 12:18:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:05.093 12:18:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.093 12:18:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.093 12:18:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.093 12:18:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:05.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:22:05.093 00:22:05.093 --- 10.0.0.2 ping statistics --- 00:22:05.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.093 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:22:05.093 12:18:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:05.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:22:05.093 00:22:05.093 --- 10.0.0.1 ping statistics --- 00:22:05.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.093 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:22:05.093 12:18:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.093 12:18:05 -- nvmf/common.sh@411 -- # return 0 00:22:05.093 12:18:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:05.093 12:18:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.093 12:18:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:05.093 12:18:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:05.093 12:18:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.093 12:18:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:05.093 12:18:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:05.093 12:18:05 -- host/fio.sh@14 -- # [[ y != y ]] 00:22:05.093 12:18:05 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:22:05.093 12:18:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:05.093 12:18:05 -- common/autotest_common.sh@10 -- # set +x 00:22:05.093 12:18:05 -- host/fio.sh@22 -- # nvmfpid=3493872 00:22:05.093 12:18:05 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.093 12:18:05 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:05.093 12:18:05 -- host/fio.sh@26 -- # waitforlisten 3493872 00:22:05.093 12:18:05 -- common/autotest_common.sh@817 -- # '[' -z 3493872 ']' 00:22:05.093 12:18:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.093 12:18:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:05.093 12:18:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.093 12:18:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:05.093 12:18:05 -- common/autotest_common.sh@10 -- # set +x 00:22:05.093 [2024-04-26 12:18:05.396562] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:22:05.093 [2024-04-26 12:18:05.396629] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.093 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.093 [2024-04-26 12:18:05.469200] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.093 [2024-04-26 12:18:05.542270] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.094 [2024-04-26 12:18:05.542314] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.094 [2024-04-26 12:18:05.542323] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.094 [2024-04-26 12:18:05.542330] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.094 [2024-04-26 12:18:05.542336] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
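Note: as in the perf test, fio.sh launches the target as ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF (pid 3493872 in this run). Read against the notices that follow: -m 0xF is the reactor core mask, which is why four reactors come up on cores 0-3 and the app reports "Total cores available: 4"; -e 0xFFFF enables every tracepoint group, which is what produces the app_setup_trace notices about spdk_trace and /dev/shm/nvmf_trace.0; and -i 0 fixes the shared-memory instance ID so the snapshot command the target itself suggests can attach to it:

  # trace-snapshot command suggested by the target (shm id 0, nvmf tracepoint group)
  spdk_trace -s nvmf -i 0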
00:22:05.094 [2024-04-26 12:18:05.542482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.094 [2024-04-26 12:18:05.542597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.094 [2024-04-26 12:18:05.542753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.094 [2024-04-26 12:18:05.542754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.094 12:18:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:05.094 12:18:06 -- common/autotest_common.sh@850 -- # return 0 00:22:05.094 12:18:06 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:05.094 12:18:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:05.094 12:18:06 -- common/autotest_common.sh@10 -- # set +x 00:22:05.094 [2024-04-26 12:18:06.184322] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.094 12:18:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:05.094 12:18:06 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:05.094 12:18:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:05.094 12:18:06 -- common/autotest_common.sh@10 -- # set +x 00:22:05.094 12:18:06 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:05.094 12:18:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:05.094 12:18:06 -- common/autotest_common.sh@10 -- # set +x 00:22:05.094 Malloc1 00:22:05.094 12:18:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:05.094 12:18:06 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:05.094 12:18:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:05.094 12:18:06 -- common/autotest_common.sh@10 -- # set +x 00:22:05.094 12:18:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:05.094 12:18:06 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:05.094 12:18:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:05.094 12:18:06 -- common/autotest_common.sh@10 -- # set +x 00:22:05.094 12:18:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:05.094 12:18:06 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.094 12:18:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:05.094 12:18:06 -- common/autotest_common.sh@10 -- # set +x 00:22:05.094 [2024-04-26 12:18:06.283851] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.094 12:18:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:05.094 12:18:06 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:05.094 12:18:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:05.094 12:18:06 -- common/autotest_common.sh@10 -- # set +x 00:22:05.094 12:18:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:05.094 12:18:06 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:05.094 12:18:06 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:05.094 12:18:06 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:05.094 12:18:06 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:05.094 12:18:06 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:05.094 12:18:06 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:05.094 12:18:06 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:05.094 12:18:06 -- common/autotest_common.sh@1327 -- # shift 00:22:05.094 12:18:06 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:05.094 12:18:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:05.094 12:18:06 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:05.094 12:18:06 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:05.094 12:18:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:05.376 12:18:06 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:05.376 12:18:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:05.376 12:18:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:05.376 12:18:06 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:05.376 12:18:06 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:05.376 12:18:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:05.376 12:18:06 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:05.376 12:18:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:05.376 12:18:06 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:05.376 12:18:06 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:05.640 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:05.640 fio-3.35 00:22:05.640 Starting 1 thread 00:22:05.640 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.182 00:22:08.182 test: (groupid=0, jobs=1): err= 0: pid=3494319: Fri Apr 26 12:18:08 2024 00:22:08.182 read: IOPS=10.6k, BW=41.6MiB/s (43.6MB/s)(83.4MiB/2005msec) 00:22:08.182 slat (usec): min=2, max=286, avg= 2.19, stdev= 2.69 00:22:08.182 clat (usec): min=3359, max=9165, avg=6620.90, stdev=1176.21 00:22:08.182 lat (usec): min=3388, max=9167, avg=6623.10, stdev=1176.21 00:22:08.182 clat percentiles (usec): 00:22:08.182 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 5014], 20.00th=[ 5211], 00:22:08.182 | 30.00th=[ 5473], 40.00th=[ 6652], 50.00th=[ 7046], 60.00th=[ 7242], 00:22:08.182 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8160], 00:22:08.182 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[ 8979], 99.95th=[ 8979], 00:22:08.182 | 99.99th=[ 9110] 00:22:08.182 bw ( KiB/s): min=37240, max=54656, per=99.88%, avg=42534.00, stdev=8139.54, samples=4 00:22:08.182 iops : min= 9310, max=13664, avg=10633.50, stdev=2034.89, samples=4 00:22:08.182 write: IOPS=10.6k, BW=41.5MiB/s (43.6MB/s)(83.3MiB/2005msec); 0 zone resets 00:22:08.182 slat (usec): min=2, max=269, avg= 2.28, stdev= 2.03 00:22:08.182 clat (usec): min=2913, 
max=8296, avg=5376.44, stdev=959.80 00:22:08.182 lat (usec): min=2931, max=8298, avg=5378.72, stdev=959.83 00:22:08.182 clat percentiles (usec): 00:22:08.182 | 1.00th=[ 3654], 5.00th=[ 3884], 10.00th=[ 4015], 20.00th=[ 4228], 00:22:08.182 | 30.00th=[ 4490], 40.00th=[ 5407], 50.00th=[ 5735], 60.00th=[ 5932], 00:22:08.182 | 70.00th=[ 6063], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6652], 00:22:08.182 | 99.00th=[ 6980], 99.50th=[ 7046], 99.90th=[ 7308], 99.95th=[ 7635], 00:22:08.182 | 99.99th=[ 8160] 00:22:08.182 bw ( KiB/s): min=37776, max=55000, per=100.00%, avg=42538.00, stdev=8335.11, samples=4 00:22:08.182 iops : min= 9444, max=13750, avg=10634.50, stdev=2083.78, samples=4 00:22:08.182 lat (msec) : 4=4.43%, 10=95.57% 00:22:08.182 cpu : usr=72.75%, sys=25.70%, ctx=53, majf=0, minf=5 00:22:08.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:08.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:08.182 issued rwts: total=21345,21318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:08.182 00:22:08.182 Run status group 0 (all jobs): 00:22:08.182 READ: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=83.4MiB (87.4MB), run=2005-2005msec 00:22:08.182 WRITE: bw=41.5MiB/s (43.6MB/s), 41.5MiB/s-41.5MiB/s (43.6MB/s-43.6MB/s), io=83.3MiB (87.3MB), run=2005-2005msec 00:22:08.182 12:18:08 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:08.182 12:18:08 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:08.182 12:18:08 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:08.182 12:18:08 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:08.182 12:18:08 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:08.182 12:18:08 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:08.182 12:18:08 -- common/autotest_common.sh@1327 -- # shift 00:22:08.182 12:18:08 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:08.182 12:18:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:08.182 12:18:08 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:08.182 12:18:08 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:08.182 12:18:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:08.182 12:18:09 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:08.182 12:18:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:08.182 12:18:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:08.182 12:18:09 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:08.182 12:18:09 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:08.182 12:18:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:08.182 12:18:09 -- common/autotest_common.sh@1331 -- # 
asan_lib= 00:22:08.182 12:18:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:08.182 12:18:09 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:08.182 12:18:09 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:08.182 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:08.182 fio-3.35 00:22:08.182 Starting 1 thread 00:22:08.182 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.727 [2024-04-26 12:18:11.700575] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9ccd0 is same with the state(5) to be set 00:22:10.727 00:22:10.727 test: (groupid=0, jobs=1): err= 0: pid=3495268: Fri Apr 26 12:18:11 2024 00:22:10.727 read: IOPS=9341, BW=146MiB/s (153MB/s)(293MiB/2007msec) 00:22:10.727 slat (usec): min=3, max=486, avg= 3.66, stdev= 3.87 00:22:10.727 clat (usec): min=2035, max=17167, avg=8314.86, stdev=2070.23 00:22:10.727 lat (usec): min=2038, max=17171, avg=8318.51, stdev=2070.35 00:22:10.727 clat percentiles (usec): 00:22:10.727 | 1.00th=[ 4178], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6456], 00:22:10.727 | 30.00th=[ 7111], 40.00th=[ 7570], 50.00th=[ 8094], 60.00th=[ 8717], 00:22:10.727 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[10945], 95.00th=[11731], 00:22:10.727 | 99.00th=[13435], 99.50th=[14091], 99.90th=[15139], 99.95th=[15139], 00:22:10.727 | 99.99th=[15270] 00:22:10.727 bw ( KiB/s): min=69984, max=84352, per=49.60%, avg=74136.00, stdev=6849.12, samples=4 00:22:10.727 iops : min= 4374, max= 5272, avg=4633.50, stdev=428.07, samples=4 00:22:10.727 write: IOPS=5606, BW=87.6MiB/s (91.8MB/s)(152MiB/1731msec); 0 zone resets 00:22:10.727 slat (usec): min=40, max=358, avg=41.14, stdev= 7.62 00:22:10.727 clat (usec): min=2201, max=17137, avg=9440.34, stdev=1591.03 00:22:10.727 lat (usec): min=2241, max=17177, avg=9481.48, stdev=1592.57 00:22:10.727 clat percentiles (usec): 00:22:10.727 | 1.00th=[ 6128], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 8160], 00:22:10.727 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:22:10.727 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11469], 95.00th=[12125], 00:22:10.727 | 99.00th=[14353], 99.50th=[15008], 99.90th=[16057], 99.95th=[16188], 00:22:10.727 | 99.99th=[17171] 00:22:10.727 bw ( KiB/s): min=72864, max=87296, per=86.14%, avg=77264.00, stdev=6736.26, samples=4 00:22:10.727 iops : min= 4554, max= 5456, avg=4829.00, stdev=421.02, samples=4 00:22:10.727 lat (msec) : 4=0.59%, 10=72.98%, 20=26.43% 00:22:10.727 cpu : usr=86.09%, sys=12.36%, ctx=15, majf=0, minf=16 00:22:10.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:10.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:10.727 issued rwts: total=18749,9704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.727 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:10.727 00:22:10.727 Run status group 0 (all jobs): 00:22:10.727 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=293MiB (307MB), run=2007-2007msec 00:22:10.727 WRITE: bw=87.6MiB/s (91.8MB/s), 87.6MiB/s-87.6MiB/s (91.8MB/s-91.8MB/s), io=152MiB (159MB), run=1731-1731msec 00:22:10.727 12:18:11 -- 
host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:10.727 12:18:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.727 12:18:11 -- common/autotest_common.sh@10 -- # set +x 00:22:10.727 12:18:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.727 12:18:11 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:10.727 12:18:11 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:10.727 12:18:11 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:10.727 12:18:11 -- host/fio.sh@84 -- # nvmftestfini 00:22:10.727 12:18:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:10.727 12:18:11 -- nvmf/common.sh@117 -- # sync 00:22:10.727 12:18:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:10.727 12:18:11 -- nvmf/common.sh@120 -- # set +e 00:22:10.727 12:18:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:10.727 12:18:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:10.727 rmmod nvme_tcp 00:22:10.727 rmmod nvme_fabrics 00:22:10.727 rmmod nvme_keyring 00:22:10.727 12:18:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:10.727 12:18:11 -- nvmf/common.sh@124 -- # set -e 00:22:10.727 12:18:11 -- nvmf/common.sh@125 -- # return 0 00:22:10.727 12:18:11 -- nvmf/common.sh@478 -- # '[' -n 3493872 ']' 00:22:10.727 12:18:11 -- nvmf/common.sh@479 -- # killprocess 3493872 00:22:10.727 12:18:11 -- common/autotest_common.sh@936 -- # '[' -z 3493872 ']' 00:22:10.727 12:18:11 -- common/autotest_common.sh@940 -- # kill -0 3493872 00:22:10.728 12:18:11 -- common/autotest_common.sh@941 -- # uname 00:22:10.728 12:18:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:10.728 12:18:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3493872 00:22:10.728 12:18:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:10.728 12:18:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:10.728 12:18:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3493872' 00:22:10.728 killing process with pid 3493872 00:22:10.728 12:18:11 -- common/autotest_common.sh@955 -- # kill 3493872 00:22:10.728 12:18:11 -- common/autotest_common.sh@960 -- # wait 3493872 00:22:10.988 12:18:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:10.988 12:18:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:10.988 12:18:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:10.988 12:18:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:10.988 12:18:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:10.988 12:18:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.988 12:18:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.988 12:18:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.901 12:18:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:12.901 00:22:12.901 real 0m16.235s 00:22:12.901 user 1m2.104s 00:22:12.901 sys 0m7.166s 00:22:12.901 12:18:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:12.901 12:18:14 -- common/autotest_common.sh@10 -- # set +x 00:22:12.901 ************************************ 00:22:12.901 END TEST nvmf_fio_host 00:22:12.901 ************************************ 00:22:13.163 12:18:14 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:13.163 12:18:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:13.163 12:18:14 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:22:13.163 12:18:14 -- common/autotest_common.sh@10 -- # set +x 00:22:13.163 ************************************ 00:22:13.163 START TEST nvmf_failover 00:22:13.163 ************************************ 00:22:13.163 12:18:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:13.163 * Looking for test storage... 00:22:13.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:13.163 12:18:14 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.425 12:18:14 -- nvmf/common.sh@7 -- # uname -s 00:22:13.425 12:18:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.425 12:18:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.425 12:18:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.425 12:18:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.425 12:18:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.425 12:18:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.425 12:18:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.425 12:18:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.425 12:18:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.425 12:18:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.425 12:18:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:13.425 12:18:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:13.425 12:18:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.425 12:18:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.425 12:18:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.425 12:18:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.425 12:18:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.425 12:18:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.425 12:18:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.425 12:18:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.425 12:18:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.425 12:18:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.426 12:18:14 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.426 12:18:14 -- paths/export.sh@5 -- # export PATH 00:22:13.426 12:18:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.426 12:18:14 -- nvmf/common.sh@47 -- # : 0 00:22:13.426 12:18:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:13.426 12:18:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:13.426 12:18:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.426 12:18:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.426 12:18:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.426 12:18:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:13.426 12:18:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:13.426 12:18:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:13.426 12:18:14 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:13.426 12:18:14 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:13.426 12:18:14 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:13.426 12:18:14 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.426 12:18:14 -- host/failover.sh@18 -- # nvmftestinit 00:22:13.426 12:18:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:13.426 12:18:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.426 12:18:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:13.426 12:18:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:13.426 12:18:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:13.426 12:18:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.426 12:18:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.426 12:18:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.426 12:18:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:13.426 12:18:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:13.426 12:18:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:13.426 12:18:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.571 12:18:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:21.571 12:18:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:21.571 12:18:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:21.571 12:18:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:21.571 12:18:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:21.571 12:18:21 -- nvmf/common.sh@293 -- # pci_drivers=() 
00:22:21.571 12:18:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:21.571 12:18:21 -- nvmf/common.sh@295 -- # net_devs=() 00:22:21.571 12:18:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:21.571 12:18:21 -- nvmf/common.sh@296 -- # e810=() 00:22:21.571 12:18:21 -- nvmf/common.sh@296 -- # local -ga e810 00:22:21.571 12:18:21 -- nvmf/common.sh@297 -- # x722=() 00:22:21.571 12:18:21 -- nvmf/common.sh@297 -- # local -ga x722 00:22:21.571 12:18:21 -- nvmf/common.sh@298 -- # mlx=() 00:22:21.571 12:18:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:21.571 12:18:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.571 12:18:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.571 12:18:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.571 12:18:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.571 12:18:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.571 12:18:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.571 12:18:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.571 12:18:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.571 12:18:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.571 12:18:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.571 12:18:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.571 12:18:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:21.571 12:18:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:21.571 12:18:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:21.571 12:18:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.571 12:18:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:21.571 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:21.571 12:18:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.571 12:18:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:21.571 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:21.571 12:18:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:21.571 12:18:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:21.571 12:18:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.571 12:18:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.571 12:18:21 -- 
nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:21.572 12:18:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.572 12:18:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:21.572 Found net devices under 0000:31:00.0: cvl_0_0 00:22:21.572 12:18:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.572 12:18:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.572 12:18:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.572 12:18:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:21.572 12:18:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.572 12:18:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:21.572 Found net devices under 0000:31:00.1: cvl_0_1 00:22:21.572 12:18:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.572 12:18:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:21.572 12:18:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:21.572 12:18:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:21.572 12:18:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:21.572 12:18:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:21.572 12:18:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.572 12:18:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.572 12:18:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.572 12:18:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:21.572 12:18:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.572 12:18:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.572 12:18:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:21.572 12:18:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.572 12:18:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.572 12:18:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:21.572 12:18:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:21.572 12:18:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.572 12:18:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.572 12:18:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.572 12:18:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.572 12:18:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:21.572 12:18:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.572 12:18:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.572 12:18:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.572 12:18:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:21.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:22:21.572 00:22:21.572 --- 10.0.0.2 ping statistics --- 00:22:21.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.572 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:22:21.572 12:18:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:21.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:22:21.572 00:22:21.572 --- 10.0.0.1 ping statistics --- 00:22:21.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.572 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:22:21.572 12:18:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.572 12:18:21 -- nvmf/common.sh@411 -- # return 0 00:22:21.572 12:18:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:21.572 12:18:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.572 12:18:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:21.572 12:18:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:21.572 12:18:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.572 12:18:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:21.572 12:18:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:21.572 12:18:21 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:21.572 12:18:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:21.572 12:18:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:21.572 12:18:21 -- common/autotest_common.sh@10 -- # set +x 00:22:21.572 12:18:21 -- nvmf/common.sh@470 -- # nvmfpid=3500083 00:22:21.572 12:18:21 -- nvmf/common.sh@471 -- # waitforlisten 3500083 00:22:21.572 12:18:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:21.572 12:18:21 -- common/autotest_common.sh@817 -- # '[' -z 3500083 ']' 00:22:21.572 12:18:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.572 12:18:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:21.572 12:18:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.572 12:18:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:21.572 12:18:21 -- common/autotest_common.sh@10 -- # set +x 00:22:21.572 [2024-04-26 12:18:21.847041] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:22:21.572 [2024-04-26 12:18:21.847108] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.572 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.572 [2024-04-26 12:18:21.936992] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:21.572 [2024-04-26 12:18:22.030360] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.572 [2024-04-26 12:18:22.030417] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.572 [2024-04-26 12:18:22.030426] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.572 [2024-04-26 12:18:22.030433] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.572 [2024-04-26 12:18:22.030440] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
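As in the fio run earlier, nvmfappstart launches nvmf_tgt inside the target namespace and then waits for its RPC socket before any configuration is attempted. A rough sketch of that wait-for-listen pattern (relative SPDK paths and the 0.5 s poll interval are assumptions, not the script's exact implementation):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the target answers, bailing out
    # if the process dies first
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done
    # target is now ready for nvmf_create_transport / bdev / subsystem RPCs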
00:22:21.572 [2024-04-26 12:18:22.030571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.572 [2024-04-26 12:18:22.030740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.572 [2024-04-26 12:18:22.030741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.572 12:18:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:21.572 12:18:22 -- common/autotest_common.sh@850 -- # return 0 00:22:21.572 12:18:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:21.572 12:18:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:21.572 12:18:22 -- common/autotest_common.sh@10 -- # set +x 00:22:21.572 12:18:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.572 12:18:22 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:21.832 [2024-04-26 12:18:22.795172] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.832 12:18:22 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:21.832 Malloc0 00:22:21.832 12:18:23 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:22.097 12:18:23 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:22.367 12:18:23 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.367 [2024-04-26 12:18:23.474278] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.367 12:18:23 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:22.627 [2024-04-26 12:18:23.642696] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:22.627 12:18:23 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:22.627 [2024-04-26 12:18:23.807206] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:22.627 12:18:23 -- host/failover.sh@31 -- # bdevperf_pid=3500467 00:22:22.627 12:18:23 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:22.627 12:18:23 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:22.627 12:18:23 -- host/failover.sh@34 -- # waitforlisten 3500467 /var/tmp/bdevperf.sock 00:22:22.627 12:18:23 -- common/autotest_common.sh@817 -- # '[' -z 3500467 ']' 00:22:22.627 12:18:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.627 12:18:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:22.627 12:18:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:22.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.627 12:18:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:22.627 12:18:23 -- common/autotest_common.sh@10 -- # set +x 00:22:23.566 12:18:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:23.566 12:18:24 -- common/autotest_common.sh@850 -- # return 0 00:22:23.566 12:18:24 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:23.825 NVMe0n1 00:22:23.825 12:18:24 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:24.084 00:22:24.084 12:18:25 -- host/failover.sh@39 -- # run_test_pid=3500788 00:22:24.084 12:18:25 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:24.084 12:18:25 -- host/failover.sh@41 -- # sleep 1 00:22:25.464 12:18:26 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.464 [2024-04-26 12:18:26.427442] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be set 00:22:25.464 [2024-04-26 12:18:26.427481] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be set 00:22:25.464 [2024-04-26 12:18:26.427487] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be set 00:22:25.464 [2024-04-26 12:18:26.427492] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be set 00:22:25.464 [2024-04-26 12:18:26.427496] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be set 00:22:25.464 [2024-04-26 12:18:26.427501] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be set 00:22:25.464 [2024-04-26 12:18:26.427506] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be set 00:22:25.464 [2024-04-26 12:18:26.427511] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be set 00:22:25.464 [2024-04-26 12:18:26.427515] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be set 00:22:25.464 [2024-04-26 12:18:26.427524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be set 00:22:25.464 [2024-04-26 12:18:26.427529] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be set 00:22:25.464 [2024-04-26 12:18:26.427533] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be set 00:22:25.464 [2024-04-26 12:18:26.427538] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be 
set 00:22:25.464 [2024-04-26 12:18:26.427542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be set 00:22:25.464 [2024-04-26 12:18:26.427546] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01ca0 is same with the state(5) to be set 00:22:25.464 12:18:26 -- host/failover.sh@45 -- # sleep 3 00:22:28.769 12:18:29 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:28.769 00:22:28.769 12:18:29 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:28.769 [2024-04-26 12:18:29.865633] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865669] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865676] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865680] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865685] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865690] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865694] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865704] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865713] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865717] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865721] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865726] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865730] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865734] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865739] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865743] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865753] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865758] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865762] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865767] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865771] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865775] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865780] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865784] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865788] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865793] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865801] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865810] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865815] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865819] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865823] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865828] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865832] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865840] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865845] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865849] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865854] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865858] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865863] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865877] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865882] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865887] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865891] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865900] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865904] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865913] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865917] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865922] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865926] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865931] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865936] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 
00:22:28.769 [2024-04-26 12:18:29.865944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865949] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865953] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865957] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865962] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865966] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865971] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865976] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865986] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865991] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.865995] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866001] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866006] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866015] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866020] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866024] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866036] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866041] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866047] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is 
same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866051] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866056] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866065] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866070] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866074] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866083] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866093] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866099] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866109] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866113] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866118] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866123] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866128] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866133] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866143] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866149] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866159] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866165] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866176] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866181] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866186] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 [2024-04-26 12:18:29.866190] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b50 is same with the state(5) to be set 00:22:28.769 12:18:29 -- host/failover.sh@50 -- # sleep 3 00:22:32.071 12:18:32 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:32.071 [2024-04-26 12:18:33.034740] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.071 12:18:33 -- host/failover.sh@55 -- # sleep 1 00:22:33.017 12:18:34 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:33.017 [2024-04-26 12:18:34.210928] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.210965] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.210970] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.210975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.210979] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.210984] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.210989] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.210993] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.210997] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211002] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211006] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211020] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211025] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211029] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211034] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211038] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211043] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211048] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211052] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211056] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211061] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211065] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211070] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211074] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211078] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211083] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211093] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211098] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.017 [2024-04-26 12:18:34.211102] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03850 is same with the state(5) to be set 00:22:33.278 12:18:34 -- host/failover.sh@59 -- # wait 3500788 00:22:39.969 0 00:22:39.969 12:18:40 -- host/failover.sh@61 -- # killprocess 3500467 00:22:39.969 12:18:40 -- 
common/autotest_common.sh@936 -- # '[' -z 3500467 ']' 00:22:39.969 12:18:40 -- common/autotest_common.sh@940 -- # kill -0 3500467 00:22:39.969 12:18:40 -- common/autotest_common.sh@941 -- # uname 00:22:39.969 12:18:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:39.969 12:18:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3500467 00:22:39.969 12:18:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:39.969 12:18:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:39.969 12:18:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3500467' 00:22:39.969 killing process with pid 3500467 00:22:39.969 12:18:40 -- common/autotest_common.sh@955 -- # kill 3500467 00:22:39.969 12:18:40 -- common/autotest_common.sh@960 -- # wait 3500467 00:22:39.969 12:18:40 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:39.969 [2024-04-26 12:18:23.873266] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:22:39.969 [2024-04-26 12:18:23.873320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3500467 ] 00:22:39.969 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.969 [2024-04-26 12:18:23.933281] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.969 [2024-04-26 12:18:23.996074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.969 Running I/O for 15 seconds... 00:22:39.969 [2024-04-26 12:18:26.427830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.427870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.427887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.427896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.427906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.427913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.427923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.427930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.427939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.427947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.427956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96176 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.427963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.427972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.427979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.427988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.427995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.969 [2024-04-26 12:18:26.428130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428295] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.969 [2024-04-26 12:18:26.428304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.969 [2024-04-26 12:18:26.428311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.970 [2024-04-26 12:18:26.428327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.970 [2024-04-26 12:18:26.428342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.970 [2024-04-26 12:18:26.428358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.970 [2024-04-26 12:18:26.428375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.970 [2024-04-26 12:18:26.428390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.970 [2024-04-26 12:18:26.428407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.970 [2024-04-26 12:18:26.428424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.970 [2024-04-26 12:18:26.428441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.970 [2024-04-26 12:18:26.428457] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.970 [2024-04-26 12:18:26.428475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.970 [2024-04-26 12:18:26.428491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.970 [2024-04-26 12:18:26.428507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.970 [2024-04-26 12:18:26.428523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:39.970 [2024-04-26 12:18:26.428791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428962] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.428989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.428999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.970 [2024-04-26 12:18:26.429059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:55 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.970 [2024-04-26 12:18:26.429435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.970 [2024-04-26 12:18:26.429442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.971 [2024-04-26 12:18:26.429458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95880 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:39.971 [2024-04-26 12:18:26.429641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 
12:18:26.429808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:26.429975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.429983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df4800 is same with the state(5) to be set 00:22:39.971 [2024-04-26 12:18:26.429991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.971 [2024-04-26 12:18:26.429997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.971 [2024-04-26 12:18:26.430004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96128 len:8 PRP1 0x0 PRP2 0x0 00:22:39.971 [2024-04-26 12:18:26.430010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.430045] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1df4800 was disconnected and freed. reset controller. 00:22:39.971 [2024-04-26 12:18:26.430055] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:39.971 [2024-04-26 12:18:26.430075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.971 [2024-04-26 12:18:26.430083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.430092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.971 [2024-04-26 12:18:26.430099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.430106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.971 [2024-04-26 12:18:26.430113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.430122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.971 [2024-04-26 12:18:26.430129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:26.430137] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:39.971 [2024-04-26 12:18:26.433643] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:39.971 [2024-04-26 12:18:26.433665] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfee40 (9): Bad file descriptor 00:22:39.971 [2024-04-26 12:18:26.471620] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
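The reset sequence above is what host/failover.sh exercises: listeners for the subsystem are added and removed with rpc.py while bdevperf keeps I/O running, and when the listener the host is connected to goes away, bdev_nvme fails over to the next registered trid (10.0.0.2:4420 -> 10.0.0.2:4421 in this run) and resets the controller. A minimal sketch of the listener-migration calls, using only the rpc.py invocations visible in this trace (the full script's loop, timing, and port choices may differ):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # expose the listener the host should reconnect to
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  # retire another listener so the initiator is forced to move
  $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422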
00:22:39.971 [2024-04-26 12:18:29.868619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 
12:18:29.868830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.868988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.868995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.869004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.869012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.869021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.869027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.869036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.869044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.869053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.869060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.869069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.869076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.869085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.869092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.869101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.869110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.869119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.869126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.869135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.869142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.869151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.869158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.869167] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.869174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.971 [2024-04-26 12:18:29.869186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.971 [2024-04-26 12:18:29.869192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.972 [2024-04-26 12:18:29.869209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.972 [2024-04-26 12:18:29.869226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.972 [2024-04-26 12:18:29.869242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.972 [2024-04-26 12:18:29.869258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.972 [2024-04-26 12:18:29.869274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.972 [2024-04-26 12:18:29.869290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.972 [2024-04-26 12:18:29.869306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.972 [2024-04-26 12:18:29.869324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.972 [2024-04-26 12:18:29.869340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.972 [2024-04-26 12:18:29.869501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869661] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869824] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.869984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.869991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.870008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.870023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.870039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.870056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.870073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.972 [2024-04-26 12:18:29.870088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.972 [2024-04-26 12:18:29.870115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:8 PRP1 0x0 PRP2 0x0 00:22:39.972 [2024-04-26 12:18:29.870123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.972 [2024-04-26 12:18:29.870139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.972 [2024-04-26 12:18:29.870145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21064 len:8 PRP1 0x0 PRP2 0x0 00:22:39.972 [2024-04-26 12:18:29.870151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.972 [2024-04-26 12:18:29.870166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.972 [2024-04-26 12:18:29.870172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:21072 len:8 PRP1 0x0 PRP2 0x0 00:22:39.972 [2024-04-26 12:18:29.870178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.972 [2024-04-26 12:18:29.870191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.972 [2024-04-26 12:18:29.870197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21080 len:8 PRP1 0x0 PRP2 0x0 00:22:39.972 [2024-04-26 12:18:29.870204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.972 [2024-04-26 12:18:29.870217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.972 [2024-04-26 12:18:29.870222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:8 PRP1 0x0 PRP2 0x0 00:22:39.972 [2024-04-26 12:18:29.870229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.972 [2024-04-26 12:18:29.870242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.972 [2024-04-26 12:18:29.870248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21096 len:8 PRP1 0x0 PRP2 0x0 00:22:39.972 [2024-04-26 12:18:29.870256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.972 [2024-04-26 12:18:29.870268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.972 [2024-04-26 12:18:29.870274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21104 len:8 PRP1 0x0 PRP2 0x0 00:22:39.972 [2024-04-26 12:18:29.870281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.972 [2024-04-26 12:18:29.870294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.972 [2024-04-26 12:18:29.870300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21112 len:8 PRP1 0x0 PRP2 0x0 00:22:39.972 [2024-04-26 12:18:29.870307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.972 [2024-04-26 12:18:29.870320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.972 [2024-04-26 12:18:29.870326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:8 PRP1 0x0 PRP2 0x0 00:22:39.972 
[2024-04-26 12:18:29.870334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.972 [2024-04-26 12:18:29.870347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.972 [2024-04-26 12:18:29.870353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21128 len:8 PRP1 0x0 PRP2 0x0 00:22:39.972 [2024-04-26 12:18:29.870362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.972 [2024-04-26 12:18:29.870369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21136 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21144 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21160 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21168 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21176 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21192 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21200 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21208 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21224 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21232 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21240 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.870770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21256 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.870777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.870784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.870789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21264 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:39.973 [2024-04-26 12:18:29.881606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.881613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21272 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.881634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.881640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.881663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.881668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21288 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.881689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.881695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21296 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.881716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.881721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21304 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.881742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.881747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.881772] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.881779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21320 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.881799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.881804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21328 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.881825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.881831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21336 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.881867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.881873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.881896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.881901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21352 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.881922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.881927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21360 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.881948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.881953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21368 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.881976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.973 [2024-04-26 12:18:29.881981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.973 [2024-04-26 12:18:29.881989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:8 PRP1 0x0 PRP2 0x0 00:22:39.973 [2024-04-26 12:18:29.881996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.882035] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e0b2f0 was disconnected and freed. reset controller. 00:22:39.973 [2024-04-26 12:18:29.882045] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:39.973 [2024-04-26 12:18:29.882071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.973 [2024-04-26 12:18:29.882079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.882088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.973 [2024-04-26 12:18:29.882096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.882104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.973 [2024-04-26 12:18:29.882111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.882119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.973 [2024-04-26 12:18:29.882126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:29.882134] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:39.973 [2024-04-26 12:18:29.882171] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfee40 (9): Bad file descriptor 00:22:39.973 [2024-04-26 12:18:29.885660] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:39.973 [2024-04-26 12:18:29.932277] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:39.973 [2024-04-26 12:18:34.211612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211824] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.973 [2024-04-26 12:18:34.211942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.973 [2024-04-26 12:18:34.211950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.211960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.211968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.211977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.211984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.211993] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30392 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.974 [2024-04-26 12:18:34.212322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:30520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212491] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:29688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:29712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:29720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.212685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.212989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.974 [2024-04-26 12:18:34.212997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.213005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:29784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.213012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.213021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.213029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.213037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.213044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.213053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.213061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.213070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.213077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.213086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.213093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.974 [2024-04-26 12:18:34.213102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.974 [2024-04-26 12:18:34.213110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 
12:18:34.213150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213502] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30096 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.975 [2024-04-26 12:18:34.213706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.213733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30120 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.213742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.975 [2024-04-26 12:18:34.213777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.975 [2024-04-26 12:18:34.213792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.975 [2024-04-26 12:18:34.213808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.975 [2024-04-26 12:18:34.213824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.213831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfee40 is same with the state(5) to be set 00:22:39.975 [2024-04-26 12:18:34.214111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.214120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.214127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30128 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.214134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 
12:18:34.214146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.214152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.214158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30136 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.214165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.214173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.214178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.214185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30144 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.214192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.214200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.214205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.214211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30152 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.214218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.214227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.214233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.214238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30160 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.214245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.214253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.214258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.214264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30168 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.214271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.214279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.214284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.214290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30176 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.214297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.214304] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.214310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.214316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30184 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.214323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.214331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.214336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.214342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30192 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.214350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.224081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.224110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.224120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30200 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.224130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.224138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.224143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.224150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30208 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.224158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.224165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.224170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.224176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30216 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.224183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.224191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.224196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.224202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30224 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.224210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.224217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:22:39.975 [2024-04-26 12:18:34.224223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.224230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30232 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.224237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.224244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.224250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.224255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30240 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.224263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.224270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.224276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.224282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30248 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.224289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.224296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.224302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.224312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30256 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.224320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.224327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.224333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.224339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30264 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.224347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.224354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.224360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.224366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30272 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.224372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.224380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.224386] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.224392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30280 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.224399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.224406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.975 [2024-04-26 12:18:34.224412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.975 [2024-04-26 12:18:34.224418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30288 len:8 PRP1 0x0 PRP2 0x0 00:22:39.975 [2024-04-26 12:18:34.224425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.975 [2024-04-26 12:18:34.224433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30296 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30304 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30312 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30320 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30328 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30336 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30344 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30352 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30360 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30368 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 
[2024-04-26 12:18:34.224707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30376 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30384 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30392 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30400 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30408 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30416 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30424 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30432 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30440 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30448 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.224975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.224981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30456 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.224987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.224995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30464 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:30472 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30480 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30488 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30496 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30504 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30512 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30520 len:8 PRP1 0x0 PRP2 0x0 
00:22:39.976 [2024-04-26 12:18:34.225197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30528 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29664 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29672 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29680 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29688 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29696 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29704 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29712 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29720 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29728 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29736 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.976 [2024-04-26 12:18:34.225504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29744 len:8 PRP1 0x0 PRP2 0x0 00:22:39.976 [2024-04-26 12:18:34.225511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.976 [2024-04-26 12:18:34.225519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.976 [2024-04-26 12:18:34.225525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.225530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29752 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.225541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.225548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.225555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.225561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29760 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.225568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.225575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.225580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.225586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29768 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.225595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.225602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.225608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.225613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29776 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.225620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.225628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.225633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.225639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30536 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.225647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.225654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.225660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.225665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30544 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.225672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:39.977 [2024-04-26 12:18:34.225680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.225686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.225692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30552 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.225700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.225708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.225714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.225720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30560 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.225727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.225735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.225741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.225748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30568 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.225755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.225763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.225768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.225774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30576 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.225780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.225788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.225794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.233679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30584 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.233707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.233721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.233727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.233734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30592 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.233741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.233749] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.233755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.233761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30600 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.233768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.233776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.233781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.233788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30608 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.233796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.233803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.233809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.233815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30616 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.233822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.233830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.233835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.233850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30624 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.233858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.233866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.233876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.233882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30632 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.233889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.233897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.233904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.233911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30640 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.233919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.233926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:22:39.977 [2024-04-26 12:18:34.233931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.233937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30648 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.233944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.233952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.233958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.233963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30656 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.233970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.233978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.233984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.233990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30664 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.233997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30672 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30680 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29784 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234091] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29792 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29800 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29808 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29816 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29824 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29832 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29840 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29848 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29856 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29864 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29872 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29880 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 
12:18:34.234417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29888 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29896 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29904 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29912 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29920 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29928 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29936 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29944 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29952 len:8 PRP1 0x0 PRP2 0x0 00:22:39.977 [2024-04-26 12:18:34.234634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.977 [2024-04-26 12:18:34.234641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.977 [2024-04-26 12:18:34.234647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.977 [2024-04-26 12:18:34.234653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29960 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.234660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.234667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.234672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.234679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29968 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.234685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.234694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.234699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.234705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29976 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.234712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.234719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.234725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.234731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:29984 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.234738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.234746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.234751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.234757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29992 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.234764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.234772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.234777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.234783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30000 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.234790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.234798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.234803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.234810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30008 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.234817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.234825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.234830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.234841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30016 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.234848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.234856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.234862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.234868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30024 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.234875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.234883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.234888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.234895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30032 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 
[2024-04-26 12:18:34.234901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.234910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.234916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.234922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30040 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.234929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.234937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.234943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.234949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30048 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.234956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.234965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.234970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.234976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30056 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.234983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.234991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.234996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.235002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30064 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.235010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.235017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.235022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.235028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30072 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.235035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.235043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.235048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.235054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30080 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.235062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.235069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.235074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.235080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30088 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.235087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.235095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.235101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.235107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30096 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.235115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.235123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.235129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.235135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30104 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.235142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.235150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.235155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.235162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30112 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.235169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.235176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.978 [2024-04-26 12:18:34.235183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.978 [2024-04-26 12:18:34.235189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30120 len:8 PRP1 0x0 PRP2 0x0 00:22:39.978 [2024-04-26 12:18:34.235196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.978 [2024-04-26 12:18:34.235235] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fc8370 was disconnected and freed. reset controller. 00:22:39.978 [2024-04-26 12:18:34.235246] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:39.978 [2024-04-26 12:18:34.235253] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:39.978 [2024-04-26 12:18:34.235296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfee40 (9): Bad file descriptor 00:22:39.978 [2024-04-26 12:18:34.238784] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:39.978 [2024-04-26 12:18:34.283530] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:39.978 00:22:39.978 Latency(us) 00:22:39.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.978 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:39.978 Verification LBA range: start 0x0 length 0x4000 00:22:39.978 NVMe0n1 : 15.01 11223.95 43.84 280.67 0.00 11098.73 508.59 27852.80 00:22:39.978 =================================================================================================================== 00:22:39.978 Total : 11223.95 43.84 280.67 0.00 11098.73 508.59 27852.80 00:22:39.978 Received shutdown signal, test time was about 15.000000 seconds 00:22:39.978 00:22:39.978 Latency(us) 00:22:39.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.978 =================================================================================================================== 00:22:39.978 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.978 12:18:40 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:39.978 12:18:40 -- host/failover.sh@65 -- # count=3 00:22:39.978 12:18:40 -- host/failover.sh@67 -- # (( count != 3 )) 00:22:39.978 12:18:40 -- host/failover.sh@73 -- # bdevperf_pid=3503804 00:22:39.978 12:18:40 -- host/failover.sh@75 -- # waitforlisten 3503804 /var/tmp/bdevperf.sock 00:22:39.978 12:18:40 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:39.978 12:18:40 -- common/autotest_common.sh@817 -- # '[' -z 3503804 ']' 00:22:39.978 12:18:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.978 12:18:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:39.978 12:18:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:39.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
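The bdevperf command shown above starts the initiator in RPC-driven mode: -z keeps bdevperf idle until a perform_tests RPC arrives, -r points it at the /var/tmp/bdevperf.sock control socket, and -q 128 -o 4096 -w verify -t 1 request a 128-deep, 4 KiB verify workload lasting one second. The harness then waits (waitforlisten) for that socket before sending any RPCs. A minimal sketch of the same launch-and-wait step, using the paths from this log and a simplified polling loop as a stand-in for waitforlisten:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock
    # Start bdevperf idle; with -z it issues no I/O until perform_tests is sent.
    "$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
    BDEVPERF_PID=$!
    # Block until the RPC socket exists before issuing any rpc.py calls.
    while [ ! -S "$SOCK" ]; do sleep 0.1; done
    echo "bdevperf (pid $BDEVPERF_PID) is listening on $SOCK"

Bdev setup and the actual run are then driven over this socket with rpc.py and bdevperf.py, as the following lines show.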
00:22:39.978 12:18:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:39.978 12:18:40 -- common/autotest_common.sh@10 -- # set +x 00:22:40.240 12:18:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:40.240 12:18:41 -- common/autotest_common.sh@850 -- # return 0 00:22:40.240 12:18:41 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:40.501 [2024-04-26 12:18:41.592217] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:40.501 12:18:41 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:40.762 [2024-04-26 12:18:41.764625] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:40.762 12:18:41 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:41.023 NVMe0n1 00:22:41.023 12:18:42 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:41.285 00:22:41.285 12:18:42 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:41.546 00:22:41.808 12:18:42 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:41.808 12:18:42 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:41.808 12:18:42 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:42.069 12:18:43 -- host/failover.sh@87 -- # sleep 3 00:22:45.368 12:18:46 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:45.368 12:18:46 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:45.368 12:18:46 -- host/failover.sh@90 -- # run_test_pid=3504840 00:22:45.368 12:18:46 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:45.368 12:18:46 -- host/failover.sh@92 -- # wait 3504840 00:22:46.311 0 00:22:46.311 12:18:47 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:46.311 [2024-04-26 12:18:40.675071] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
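Taken together, the rpc.py calls above are the failover scenario itself: the target subsystem gains two extra TCP listeners on ports 4421 and 4422, controller NVMe0 is attached to nqn.2016-06.io.spdk:cnode1 over ports 4420, 4421 and 4422 so the extra paths are registered as failover trids, the 4420 path is detached, and perform_tests drives I/O that has to survive the path loss; the try.txt excerpt being dumped here shows the resulting failover from 10.0.0.2:4420 to 10.0.0.2:4421 and the successful controller reset. A condensed sketch of that sequence, assuming the target already serves the subsystem on 10.0.0.2:4420 as set up earlier in the run:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Target side: expose two additional portals for the same subsystem.
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

    # Initiator side: attach NVMe0 over all three paths; the repeated attach
    # calls with the same bdev name register the extra trids as failover
    # paths (the bdev_nvme_failover_trid notices earlier in this log).
    for port in 4420 4421 4422; do
        $RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
            -s "$port" -f ipv4 -n "$NQN"
    done

    # Drop the currently active path, give the initiator a moment, then run I/O.
    $RPC -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0
    $RPC -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
    sleep 3
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

The log below repeats the same detach-and-verify step for ports 4422 and 4421, so each path is forced to fail over once before the test shuts everything down.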
00:22:46.311 [2024-04-26 12:18:40.675128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3503804 ] 00:22:46.311 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.311 [2024-04-26 12:18:40.734962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.311 [2024-04-26 12:18:40.796476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.311 [2024-04-26 12:18:43.084580] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:46.311 [2024-04-26 12:18:43.084627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.311 [2024-04-26 12:18:43.084638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.311 [2024-04-26 12:18:43.084647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.311 [2024-04-26 12:18:43.084655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.311 [2024-04-26 12:18:43.084663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.311 [2024-04-26 12:18:43.084671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.311 [2024-04-26 12:18:43.084679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.311 [2024-04-26 12:18:43.084686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.311 [2024-04-26 12:18:43.084693] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:46.311 [2024-04-26 12:18:43.084721] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:46.311 [2024-04-26 12:18:43.084736] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e50e40 (9): Bad file descriptor 00:22:46.311 [2024-04-26 12:18:43.140143] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:46.311 Running I/O for 1 seconds... 
00:22:46.311 00:22:46.311 Latency(us) 00:22:46.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.311 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:46.311 Verification LBA range: start 0x0 length 0x4000 00:22:46.311 NVMe0n1 : 1.01 11205.76 43.77 0.00 0.00 11370.55 1843.20 13707.95 00:22:46.311 =================================================================================================================== 00:22:46.311 Total : 11205.76 43.77 0.00 0.00 11370.55 1843.20 13707.95 00:22:46.311 12:18:47 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:46.311 12:18:47 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:46.571 12:18:47 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:46.571 12:18:47 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:46.571 12:18:47 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:46.832 12:18:47 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:47.094 12:18:48 -- host/failover.sh@101 -- # sleep 3 00:22:50.389 12:18:51 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:50.389 12:18:51 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:50.389 12:18:51 -- host/failover.sh@108 -- # killprocess 3503804 00:22:50.389 12:18:51 -- common/autotest_common.sh@936 -- # '[' -z 3503804 ']' 00:22:50.389 12:18:51 -- common/autotest_common.sh@940 -- # kill -0 3503804 00:22:50.389 12:18:51 -- common/autotest_common.sh@941 -- # uname 00:22:50.389 12:18:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:50.389 12:18:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3503804 00:22:50.389 12:18:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:50.389 12:18:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:50.389 12:18:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3503804' 00:22:50.389 killing process with pid 3503804 00:22:50.389 12:18:51 -- common/autotest_common.sh@955 -- # kill 3503804 00:22:50.389 12:18:51 -- common/autotest_common.sh@960 -- # wait 3503804 00:22:50.389 12:18:51 -- host/failover.sh@110 -- # sync 00:22:50.389 12:18:51 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.389 12:18:51 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:50.389 12:18:51 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:50.649 12:18:51 -- host/failover.sh@116 -- # nvmftestfini 00:22:50.649 12:18:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:50.649 12:18:51 -- nvmf/common.sh@117 -- # sync 00:22:50.649 12:18:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.649 12:18:51 -- nvmf/common.sh@120 -- # set +e 00:22:50.649 12:18:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.649 12:18:51 -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-tcp 00:22:50.649 rmmod nvme_tcp 00:22:50.649 rmmod nvme_fabrics 00:22:50.649 rmmod nvme_keyring 00:22:50.649 12:18:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.649 12:18:51 -- nvmf/common.sh@124 -- # set -e 00:22:50.649 12:18:51 -- nvmf/common.sh@125 -- # return 0 00:22:50.649 12:18:51 -- nvmf/common.sh@478 -- # '[' -n 3500083 ']' 00:22:50.649 12:18:51 -- nvmf/common.sh@479 -- # killprocess 3500083 00:22:50.649 12:18:51 -- common/autotest_common.sh@936 -- # '[' -z 3500083 ']' 00:22:50.649 12:18:51 -- common/autotest_common.sh@940 -- # kill -0 3500083 00:22:50.649 12:18:51 -- common/autotest_common.sh@941 -- # uname 00:22:50.649 12:18:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:50.649 12:18:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3500083 00:22:50.649 12:18:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:50.649 12:18:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:50.649 12:18:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3500083' 00:22:50.649 killing process with pid 3500083 00:22:50.649 12:18:51 -- common/autotest_common.sh@955 -- # kill 3500083 00:22:50.649 12:18:51 -- common/autotest_common.sh@960 -- # wait 3500083 00:22:50.649 12:18:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:50.649 12:18:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:50.649 12:18:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:50.649 12:18:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.649 12:18:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.649 12:18:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.649 12:18:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.649 12:18:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.193 12:18:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.193 00:22:53.193 real 0m39.663s 00:22:53.193 user 2m2.028s 00:22:53.193 sys 0m8.120s 00:22:53.193 12:18:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:53.193 12:18:53 -- common/autotest_common.sh@10 -- # set +x 00:22:53.193 ************************************ 00:22:53.193 END TEST nvmf_failover 00:22:53.193 ************************************ 00:22:53.193 12:18:53 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:53.193 12:18:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:53.193 12:18:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:53.193 12:18:53 -- common/autotest_common.sh@10 -- # set +x 00:22:53.193 ************************************ 00:22:53.193 START TEST nvmf_discovery 00:22:53.193 ************************************ 00:22:53.193 12:18:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:53.193 * Looking for test storage... 
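Teardown after END TEST nvmf_failover is the usual nvmftestfini path seen in the xtrace just above: unload the kernel NVMe/TCP modules, stop the nvmf_tgt started for the test, remove the spdk namespace and flush the initiator address. Condensed into plain commands; $nvmfpid stands for the target pid (3500083 here), the kill/wait line is a simplified form of the killprocess helper, and the ip netns delete line is an assumption about what _remove_spdk_ns amounts to on this setup:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  kill "$nvmfpid" && wait "$nvmfpid"     # simplified form of the killprocess helper
  ip netns delete cvl_0_0_ns_spdk        # assumed equivalent of _remove_spdk_ns here
  ip -4 addr flush cvl_0_1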
00:22:53.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:53.193 12:18:54 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.193 12:18:54 -- nvmf/common.sh@7 -- # uname -s 00:22:53.193 12:18:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.193 12:18:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.193 12:18:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.193 12:18:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.193 12:18:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.193 12:18:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.193 12:18:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.193 12:18:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.194 12:18:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.194 12:18:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.194 12:18:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:53.194 12:18:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:53.194 12:18:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.194 12:18:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.194 12:18:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.194 12:18:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.194 12:18:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.194 12:18:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.194 12:18:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.194 12:18:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.194 12:18:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.194 12:18:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.194 12:18:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.194 12:18:54 -- paths/export.sh@5 -- # export PATH 00:22:53.194 12:18:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.194 12:18:54 -- nvmf/common.sh@47 -- # : 0 00:22:53.194 12:18:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.194 12:18:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.194 12:18:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.194 12:18:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.194 12:18:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.194 12:18:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.194 12:18:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.194 12:18:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.194 12:18:54 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:53.194 12:18:54 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:53.194 12:18:54 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:53.194 12:18:54 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:53.194 12:18:54 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:53.194 12:18:54 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:53.194 12:18:54 -- host/discovery.sh@25 -- # nvmftestinit 00:22:53.194 12:18:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:53.194 12:18:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.194 12:18:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:53.194 12:18:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:53.194 12:18:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:53.194 12:18:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.194 12:18:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.194 12:18:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.194 12:18:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:53.194 12:18:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:53.194 12:18:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.194 12:18:54 -- common/autotest_common.sh@10 -- # set +x 00:23:01.338 12:19:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:01.338 12:19:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:01.338 12:19:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.338 12:19:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.338 12:19:01 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.338 12:19:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:01.338 12:19:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.338 12:19:01 -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.338 12:19:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:01.338 12:19:01 -- nvmf/common.sh@296 -- # e810=() 00:23:01.338 12:19:01 -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.338 12:19:01 -- nvmf/common.sh@297 -- # x722=() 00:23:01.338 12:19:01 -- nvmf/common.sh@297 -- # local -ga x722 00:23:01.338 12:19:01 -- nvmf/common.sh@298 -- # mlx=() 00:23:01.338 12:19:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.338 12:19:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.338 12:19:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.338 12:19:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.338 12:19:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.338 12:19:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.338 12:19:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.338 12:19:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.338 12:19:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.338 12:19:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.338 12:19:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.338 12:19:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.338 12:19:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:01.338 12:19:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.338 12:19:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.338 12:19:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.338 12:19:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:01.338 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:01.338 12:19:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.338 12:19:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:01.338 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:01.338 12:19:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.338 12:19:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.338 
12:19:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.338 12:19:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:01.338 12:19:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.338 12:19:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:01.338 Found net devices under 0000:31:00.0: cvl_0_0 00:23:01.338 12:19:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.338 12:19:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.338 12:19:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.338 12:19:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:01.338 12:19:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.338 12:19:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:01.338 Found net devices under 0000:31:00.1: cvl_0_1 00:23:01.338 12:19:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.338 12:19:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:01.338 12:19:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:01.338 12:19:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:01.338 12:19:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:01.338 12:19:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.338 12:19:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.338 12:19:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.338 12:19:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:01.338 12:19:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.338 12:19:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.338 12:19:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:01.338 12:19:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.338 12:19:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.338 12:19:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:01.338 12:19:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:01.338 12:19:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.338 12:19:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.338 12:19:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.338 12:19:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.338 12:19:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.338 12:19:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.338 12:19:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.338 12:19:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.338 12:19:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:23:01.338 00:23:01.338 --- 10.0.0.2 ping statistics --- 00:23:01.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.339 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:23:01.339 12:19:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:23:01.339 00:23:01.339 --- 10.0.0.1 ping statistics --- 00:23:01.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.339 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:23:01.339 12:19:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.339 12:19:01 -- nvmf/common.sh@411 -- # return 0 00:23:01.339 12:19:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:01.339 12:19:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.339 12:19:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:01.339 12:19:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:01.339 12:19:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.339 12:19:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:01.339 12:19:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:01.339 12:19:01 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:01.339 12:19:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:01.339 12:19:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:01.339 12:19:01 -- common/autotest_common.sh@10 -- # set +x 00:23:01.339 12:19:01 -- nvmf/common.sh@470 -- # nvmfpid=3510225 00:23:01.339 12:19:01 -- nvmf/common.sh@471 -- # waitforlisten 3510225 00:23:01.339 12:19:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:01.339 12:19:01 -- common/autotest_common.sh@817 -- # '[' -z 3510225 ']' 00:23:01.339 12:19:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.339 12:19:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:01.339 12:19:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.339 12:19:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:01.339 12:19:01 -- common/autotest_common.sh@10 -- # set +x 00:23:01.339 [2024-04-26 12:19:01.542555] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:23:01.339 [2024-04-26 12:19:01.542604] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.339 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.339 [2024-04-26 12:19:01.604789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.339 [2024-04-26 12:19:01.661878] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.339 [2024-04-26 12:19:01.661913] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.339 [2024-04-26 12:19:01.661919] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.339 [2024-04-26 12:19:01.661924] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.339 [2024-04-26 12:19:01.661928] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
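The discovery test starts from the same physical topology: the target-side E810 port cvl_0_0 is moved into its own network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2, while the initiator port cvl_0_1 keeps 10.0.0.1 in the root namespace, which is what lets one machine run both ends of NVMe/TCP over real NICs. The nvmf_tcp_init sequence traced above, with the commands as logged and only a shell variable, $SPDK shorthand and the xtrace prefixes changed:

  NS=cvl_0_0_ns_spdk

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1

  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                            # target port -> namespace

  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside namespace

  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up

  # Accept NVMe/TCP (port 4420) traffic arriving on the initiator-side port.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2                                         # root ns -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                     # target ns -> initiator

  # The target application then runs inside the namespace:
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &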
00:23:01.339 [2024-04-26 12:19:01.661944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.339 12:19:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:01.339 12:19:02 -- common/autotest_common.sh@850 -- # return 0 00:23:01.339 12:19:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:01.339 12:19:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:01.339 12:19:02 -- common/autotest_common.sh@10 -- # set +x 00:23:01.339 12:19:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.339 12:19:02 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.339 12:19:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.339 12:19:02 -- common/autotest_common.sh@10 -- # set +x 00:23:01.339 [2024-04-26 12:19:02.397998] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.339 12:19:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.339 12:19:02 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:01.339 12:19:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.339 12:19:02 -- common/autotest_common.sh@10 -- # set +x 00:23:01.339 [2024-04-26 12:19:02.410247] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:01.339 12:19:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.339 12:19:02 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:01.339 12:19:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.339 12:19:02 -- common/autotest_common.sh@10 -- # set +x 00:23:01.339 null0 00:23:01.339 12:19:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.339 12:19:02 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:01.339 12:19:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.339 12:19:02 -- common/autotest_common.sh@10 -- # set +x 00:23:01.339 null1 00:23:01.339 12:19:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.339 12:19:02 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:01.339 12:19:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.339 12:19:02 -- common/autotest_common.sh@10 -- # set +x 00:23:01.339 12:19:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.339 12:19:02 -- host/discovery.sh@45 -- # hostpid=3510490 00:23:01.339 12:19:02 -- host/discovery.sh@46 -- # waitforlisten 3510490 /tmp/host.sock 00:23:01.339 12:19:02 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:01.339 12:19:02 -- common/autotest_common.sh@817 -- # '[' -z 3510490 ']' 00:23:01.339 12:19:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:01.339 12:19:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:01.339 12:19:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:01.339 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:01.339 12:19:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:01.339 12:19:02 -- common/autotest_common.sh@10 -- # set +x 00:23:01.339 [2024-04-26 12:19:02.505244] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:23:01.339 [2024-04-26 12:19:02.505306] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3510490 ] 00:23:01.339 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.600 [2024-04-26 12:19:02.569655] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.600 [2024-04-26 12:19:02.642341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.172 12:19:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:02.172 12:19:03 -- common/autotest_common.sh@850 -- # return 0 00:23:02.172 12:19:03 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:02.172 12:19:03 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:02.172 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.172 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.172 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.172 12:19:03 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:02.172 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.172 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.172 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.172 12:19:03 -- host/discovery.sh@72 -- # notify_id=0 00:23:02.172 12:19:03 -- host/discovery.sh@83 -- # get_subsystem_names 00:23:02.172 12:19:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.172 12:19:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:02.172 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.172 12:19:03 -- host/discovery.sh@59 -- # sort 00:23:02.172 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.172 12:19:03 -- host/discovery.sh@59 -- # xargs 00:23:02.172 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.172 12:19:03 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:02.172 12:19:03 -- host/discovery.sh@84 -- # get_bdev_list 00:23:02.172 12:19:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.172 12:19:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:02.172 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.172 12:19:03 -- host/discovery.sh@55 -- # sort 00:23:02.172 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.172 12:19:03 -- host/discovery.sh@55 -- # xargs 00:23:02.172 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.431 12:19:03 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:02.431 12:19:03 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:02.431 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.431 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.431 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.431 12:19:03 -- host/discovery.sh@87 -- # get_subsystem_names 00:23:02.431 12:19:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.431 12:19:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:02.431 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.431 12:19:03 -- host/discovery.sh@59 -- # sort 
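Every assertion in this test is read back through two small helpers visible in the xtrace: one lists the controller names the host-side bdev_nvme module currently knows about, the other lists the bdevs created from them, both normalized with jq/sort/xargs so they compare as plain strings (both empty at this point, since discovery has not attached anything yet). A sketch mirroring those helpers; rpc.py is called directly here where the script uses its rpc_cmd wrapper, and the variable names are shorthand:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  HOST_SOCK=/tmp/host.sock

  get_subsystem_names() {
      # e.g. "nvme0" once bdev_nvme_start_discovery has attached the subsystem
      "$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  get_bdev_list() {
      # e.g. "nvme0n1 nvme0n2" once null0 and null1 are exposed as namespaces
      "$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # Discovery itself is a single RPC against the host application:
  "$RPC" -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
      -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test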
00:23:02.431 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.431 12:19:03 -- host/discovery.sh@59 -- # xargs 00:23:02.431 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.431 12:19:03 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:02.431 12:19:03 -- host/discovery.sh@88 -- # get_bdev_list 00:23:02.431 12:19:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.431 12:19:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:02.431 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.431 12:19:03 -- host/discovery.sh@55 -- # sort 00:23:02.431 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.431 12:19:03 -- host/discovery.sh@55 -- # xargs 00:23:02.431 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.431 12:19:03 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:02.431 12:19:03 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:02.431 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.431 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.431 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.431 12:19:03 -- host/discovery.sh@91 -- # get_subsystem_names 00:23:02.431 12:19:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.431 12:19:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:02.431 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.431 12:19:03 -- host/discovery.sh@59 -- # sort 00:23:02.431 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.431 12:19:03 -- host/discovery.sh@59 -- # xargs 00:23:02.431 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.431 12:19:03 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:02.431 12:19:03 -- host/discovery.sh@92 -- # get_bdev_list 00:23:02.431 12:19:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.431 12:19:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:02.432 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.432 12:19:03 -- host/discovery.sh@55 -- # sort 00:23:02.432 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.432 12:19:03 -- host/discovery.sh@55 -- # xargs 00:23:02.432 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.692 12:19:03 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:02.692 12:19:03 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:02.692 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.692 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.692 [2024-04-26 12:19:03.661278] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.692 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.692 12:19:03 -- host/discovery.sh@97 -- # get_subsystem_names 00:23:02.692 12:19:03 -- host/discovery.sh@59 -- # xargs 00:23:02.692 12:19:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.692 12:19:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:02.692 12:19:03 -- host/discovery.sh@59 -- # sort 00:23:02.692 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.692 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.692 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.692 12:19:03 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:02.692 12:19:03 -- host/discovery.sh@98 -- # get_bdev_list 00:23:02.692 12:19:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.692 12:19:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:02.692 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.692 12:19:03 -- host/discovery.sh@55 -- # sort 00:23:02.692 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.692 12:19:03 -- host/discovery.sh@55 -- # xargs 00:23:02.692 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.692 12:19:03 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:02.692 12:19:03 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:02.692 12:19:03 -- host/discovery.sh@79 -- # expected_count=0 00:23:02.692 12:19:03 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:02.692 12:19:03 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:02.692 12:19:03 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.692 12:19:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.692 12:19:03 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:02.692 12:19:03 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:02.692 12:19:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:02.692 12:19:03 -- host/discovery.sh@74 -- # jq '. | length' 00:23:02.692 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.692 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.692 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.692 12:19:03 -- host/discovery.sh@74 -- # notification_count=0 00:23:02.692 12:19:03 -- host/discovery.sh@75 -- # notify_id=0 00:23:02.692 12:19:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:02.692 12:19:03 -- common/autotest_common.sh@904 -- # return 0 00:23:02.692 12:19:03 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:02.692 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.692 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.692 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.692 12:19:03 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:02.692 12:19:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:02.692 12:19:03 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.692 12:19:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.692 12:19:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:02.692 12:19:03 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:02.692 12:19:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.692 12:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.692 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:02.692 12:19:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:02.692 12:19:03 -- host/discovery.sh@59 -- # sort 00:23:02.692 12:19:03 -- host/discovery.sh@59 -- # xargs 00:23:02.692 12:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:23:02.692 12:19:03 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:23:02.692 12:19:03 -- common/autotest_common.sh@906 -- # sleep 1 00:23:03.263 [2024-04-26 12:19:04.356002] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:03.263 [2024-04-26 12:19:04.356024] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:03.263 [2024-04-26 12:19:04.356040] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:03.263 [2024-04-26 12:19:04.442296] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:03.523 [2024-04-26 12:19:04.539753] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:03.523 [2024-04-26 12:19:04.539774] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:03.785 12:19:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:03.786 12:19:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:03.786 12:19:04 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:03.786 12:19:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:03.786 12:19:04 -- host/discovery.sh@59 -- # xargs 00:23:03.786 12:19:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:03.786 12:19:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.786 12:19:04 -- common/autotest_common.sh@10 -- # set +x 00:23:03.786 12:19:04 -- host/discovery.sh@59 -- # sort 00:23:03.786 12:19:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.786 12:19:04 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.786 12:19:04 -- common/autotest_common.sh@904 -- # return 0 00:23:03.786 12:19:04 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:03.786 12:19:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:03.786 12:19:04 -- common/autotest_common.sh@901 -- # local max=10 00:23:03.786 12:19:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:03.786 12:19:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:03.786 12:19:04 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:03.786 12:19:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.786 12:19:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:03.786 12:19:04 -- host/discovery.sh@55 -- # sort 00:23:03.786 12:19:04 -- host/discovery.sh@55 -- # xargs 00:23:03.786 12:19:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.786 12:19:04 -- common/autotest_common.sh@10 -- # set +x 00:23:03.786 12:19:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.786 12:19:04 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:03.786 12:19:04 -- common/autotest_common.sh@904 -- # return 0 00:23:03.786 12:19:04 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:03.786 12:19:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:03.786 12:19:04 -- common/autotest_common.sh@901 -- # local max=10 00:23:03.786 12:19:04 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:23:03.786 12:19:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:03.786 12:19:04 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:03.786 12:19:04 -- host/discovery.sh@63 -- # sort -n 00:23:03.786 12:19:04 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:03.786 12:19:04 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:03.786 12:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.786 12:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:03.786 12:19:05 -- host/discovery.sh@63 -- # xargs 00:23:04.047 12:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.047 12:19:05 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:23:04.047 12:19:05 -- common/autotest_common.sh@904 -- # return 0 00:23:04.047 12:19:05 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:04.047 12:19:05 -- host/discovery.sh@79 -- # expected_count=1 00:23:04.047 12:19:05 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:04.047 12:19:05 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:04.047 12:19:05 -- common/autotest_common.sh@901 -- # local max=10 00:23:04.047 12:19:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.047 12:19:05 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:04.047 12:19:05 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:04.047 12:19:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:04.047 12:19:05 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:04.047 12:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.047 12:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.047 12:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.047 12:19:05 -- host/discovery.sh@74 -- # notification_count=1 00:23:04.047 12:19:05 -- host/discovery.sh@75 -- # notify_id=1 00:23:04.047 12:19:05 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:04.047 12:19:05 -- common/autotest_common.sh@904 -- # return 0 00:23:04.047 12:19:05 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:04.047 12:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.047 12:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.047 12:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.047 12:19:05 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:04.047 12:19:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:04.047 12:19:05 -- common/autotest_common.sh@901 -- # local max=10 00:23:04.048 12:19:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.048 12:19:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:04.048 12:19:05 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:04.048 12:19:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.048 12:19:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:04.048 12:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.048 12:19:05 -- host/discovery.sh@55 -- # sort 00:23:04.048 12:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.048 12:19:05 -- host/discovery.sh@55 -- # xargs 00:23:04.309 12:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.309 12:19:05 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:04.309 12:19:05 -- common/autotest_common.sh@904 -- # return 0 00:23:04.309 12:19:05 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:04.309 12:19:05 -- host/discovery.sh@79 -- # expected_count=1 00:23:04.309 12:19:05 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:04.309 12:19:05 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:04.309 12:19:05 -- common/autotest_common.sh@901 -- # local max=10 00:23:04.309 12:19:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.309 12:19:05 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:04.309 12:19:05 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:04.309 12:19:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:04.309 12:19:05 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:04.309 12:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.309 12:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.309 12:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.309 12:19:05 -- host/discovery.sh@74 -- # notification_count=1 00:23:04.309 12:19:05 -- host/discovery.sh@75 -- # notify_id=2 00:23:04.309 12:19:05 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:04.309 12:19:05 -- common/autotest_common.sh@904 -- # return 0 00:23:04.309 12:19:05 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:04.309 12:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.309 12:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.309 [2024-04-26 12:19:05.445958] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:04.309 [2024-04-26 12:19:05.446413] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:04.309 [2024-04-26 12:19:05.446439] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:04.309 12:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.309 12:19:05 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:04.309 12:19:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:04.309 12:19:05 -- common/autotest_common.sh@901 -- # local max=10 00:23:04.309 12:19:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.309 12:19:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:04.309 12:19:05 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:04.309 12:19:05 -- host/discovery.sh@59 -- # sort 00:23:04.309 12:19:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:04.309 12:19:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:04.309 12:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.309 12:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.309 12:19:05 -- host/discovery.sh@59 -- # xargs 00:23:04.309 12:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.310 12:19:05 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.310 12:19:05 -- common/autotest_common.sh@904 -- # return 0 00:23:04.310 12:19:05 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:04.310 12:19:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:04.310 12:19:05 -- common/autotest_common.sh@901 -- # local max=10 00:23:04.310 12:19:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.310 12:19:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:04.310 12:19:05 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:04.310 12:19:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.310 12:19:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:04.310 12:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.310 12:19:05 -- host/discovery.sh@55 -- # sort 00:23:04.310 12:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.310 12:19:05 -- host/discovery.sh@55 -- # xargs 00:23:04.571 [2024-04-26 12:19:05.534085] 
bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:04.571 12:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.571 12:19:05 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:04.571 12:19:05 -- common/autotest_common.sh@904 -- # return 0 00:23:04.571 12:19:05 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:04.571 12:19:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:04.571 12:19:05 -- common/autotest_common.sh@901 -- # local max=10 00:23:04.571 12:19:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.571 12:19:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:04.571 12:19:05 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:04.571 12:19:05 -- host/discovery.sh@63 -- # sort -n 00:23:04.571 12:19:05 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:04.571 12:19:05 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:04.571 12:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.571 12:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:04.571 12:19:05 -- host/discovery.sh@63 -- # xargs 00:23:04.571 12:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.571 12:19:05 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:04.571 12:19:05 -- common/autotest_common.sh@906 -- # sleep 1 00:23:04.571 [2024-04-26 12:19:05.632824] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:04.571 [2024-04-26 12:19:05.632846] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:04.571 [2024-04-26 12:19:05.632852] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:05.516 12:19:06 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:05.516 12:19:06 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:05.516 12:19:06 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:05.516 12:19:06 -- host/discovery.sh@63 -- # xargs 00:23:05.516 12:19:06 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:05.516 12:19:06 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:05.516 12:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.516 12:19:06 -- host/discovery.sh@63 -- # sort -n 00:23:05.516 12:19:06 -- common/autotest_common.sh@10 -- # set +x 00:23:05.516 12:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.516 12:19:06 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:05.516 12:19:06 -- common/autotest_common.sh@904 -- # return 0 00:23:05.516 12:19:06 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:05.516 12:19:06 -- host/discovery.sh@79 -- # expected_count=0 00:23:05.516 12:19:06 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:05.516 
12:19:06 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:05.516 12:19:06 -- common/autotest_common.sh@901 -- # local max=10 00:23:05.516 12:19:06 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:05.516 12:19:06 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:05.516 12:19:06 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:05.516 12:19:06 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:05.516 12:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.516 12:19:06 -- common/autotest_common.sh@10 -- # set +x 00:23:05.516 12:19:06 -- host/discovery.sh@74 -- # jq '. | length' 00:23:05.516 12:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.516 12:19:06 -- host/discovery.sh@74 -- # notification_count=0 00:23:05.516 12:19:06 -- host/discovery.sh@75 -- # notify_id=2 00:23:05.516 12:19:06 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:05.516 12:19:06 -- common/autotest_common.sh@904 -- # return 0 00:23:05.516 12:19:06 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:05.516 12:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.516 12:19:06 -- common/autotest_common.sh@10 -- # set +x 00:23:05.516 [2024-04-26 12:19:06.730135] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:05.516 [2024-04-26 12:19:06.730157] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:05.516 [2024-04-26 12:19:06.731429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.516 [2024-04-26 12:19:06.731447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.516 [2024-04-26 12:19:06.731457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.516 [2024-04-26 12:19:06.731465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.516 [2024-04-26 12:19:06.731473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.516 [2024-04-26 12:19:06.731480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.516 [2024-04-26 12:19:06.731488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.516 [2024-04-26 12:19:06.731495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.516 [2024-04-26 12:19:06.731502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade670 is same with the state(5) to be set 00:23:05.516 12:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.516 12:19:06 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:05.516 12:19:06 -- common/autotest_common.sh@900 -- # local 
'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:05.516 12:19:06 -- common/autotest_common.sh@901 -- # local max=10 00:23:05.516 12:19:06 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:05.779 12:19:06 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:05.779 12:19:06 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:05.779 12:19:06 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:05.779 12:19:06 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:05.779 12:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.779 12:19:06 -- host/discovery.sh@59 -- # sort 00:23:05.779 12:19:06 -- common/autotest_common.sh@10 -- # set +x 00:23:05.779 12:19:06 -- host/discovery.sh@59 -- # xargs 00:23:05.779 [2024-04-26 12:19:06.741444] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade670 (9): Bad file descriptor 00:23:05.779 [2024-04-26 12:19:06.751483] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.779 [2024-04-26 12:19:06.751794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.779 [2024-04-26 12:19:06.752266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.779 [2024-04-26 12:19:06.752302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade670 with addr=10.0.0.2, port=4420 00:23:05.779 [2024-04-26 12:19:06.752313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade670 is same with the state(5) to be set 00:23:05.779 [2024-04-26 12:19:06.752332] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade670 (9): Bad file descriptor 00:23:05.779 [2024-04-26 12:19:06.752358] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.779 [2024-04-26 12:19:06.752366] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.779 [2024-04-26 12:19:06.752380] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.779 [2024-04-26 12:19:06.752396] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:05.779 12:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.779 [2024-04-26 12:19:06.761540] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.779 [2024-04-26 12:19:06.762079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.779 [2024-04-26 12:19:06.762442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.779 [2024-04-26 12:19:06.762455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade670 with addr=10.0.0.2, port=4420 00:23:05.779 [2024-04-26 12:19:06.762465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade670 is same with the state(5) to be set 00:23:05.779 [2024-04-26 12:19:06.762484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade670 (9): Bad file descriptor 00:23:05.779 [2024-04-26 12:19:06.762518] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.779 [2024-04-26 12:19:06.762527] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.779 [2024-04-26 12:19:06.762535] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.779 [2024-04-26 12:19:06.762550] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.779 12:19:06 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.779 12:19:06 -- common/autotest_common.sh@904 -- # return 0 00:23:05.779 12:19:06 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:05.779 12:19:06 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:05.779 12:19:06 -- common/autotest_common.sh@901 -- # local max=10 00:23:05.779 12:19:06 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:05.779 12:19:06 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:05.779 12:19:06 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:05.779 12:19:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:05.780 12:19:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:05.780 12:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.780 12:19:06 -- host/discovery.sh@55 -- # sort 00:23:05.780 12:19:06 -- common/autotest_common.sh@10 -- # set +x 00:23:05.780 [2024-04-26 12:19:06.771593] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.780 12:19:06 -- host/discovery.sh@55 -- # xargs 00:23:05.780 [2024-04-26 12:19:06.772094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.772347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.772361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade670 with addr=10.0.0.2, port=4420 00:23:05.780 [2024-04-26 12:19:06.772372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade670 is same with the state(5) to be set 00:23:05.780 [2024-04-26 12:19:06.772392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade670 (9): Bad file descriptor 00:23:05.780 
[2024-04-26 12:19:06.772404] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.780 [2024-04-26 12:19:06.772411] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.780 [2024-04-26 12:19:06.772419] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.780 [2024-04-26 12:19:06.772434] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.780 [2024-04-26 12:19:06.781649] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.780 [2024-04-26 12:19:06.781893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.782211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.782222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade670 with addr=10.0.0.2, port=4420 00:23:05.780 [2024-04-26 12:19:06.782230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade670 is same with the state(5) to be set 00:23:05.780 [2024-04-26 12:19:06.782241] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade670 (9): Bad file descriptor 00:23:05.780 [2024-04-26 12:19:06.782252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.780 [2024-04-26 12:19:06.782258] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.780 [2024-04-26 12:19:06.782266] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.780 [2024-04-26 12:19:06.782277] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.780 [2024-04-26 12:19:06.791708] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.780 [2024-04-26 12:19:06.791894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.792179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.792188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade670 with addr=10.0.0.2, port=4420 00:23:05.780 [2024-04-26 12:19:06.792195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade670 is same with the state(5) to be set 00:23:05.780 [2024-04-26 12:19:06.792206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade670 (9): Bad file descriptor 00:23:05.780 [2024-04-26 12:19:06.792216] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.780 [2024-04-26 12:19:06.792222] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.780 [2024-04-26 12:19:06.792229] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.780 [2024-04-26 12:19:06.792240] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:05.780 [2024-04-26 12:19:06.801758] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.780 [2024-04-26 12:19:06.802107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.802428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.802437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade670 with addr=10.0.0.2, port=4420 00:23:05.780 [2024-04-26 12:19:06.802444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade670 is same with the state(5) to be set 00:23:05.780 [2024-04-26 12:19:06.802455] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade670 (9): Bad file descriptor 00:23:05.780 [2024-04-26 12:19:06.802472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.780 [2024-04-26 12:19:06.802479] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.780 [2024-04-26 12:19:06.802486] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.780 [2024-04-26 12:19:06.802496] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.780 12:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.780 [2024-04-26 12:19:06.811808] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.780 [2024-04-26 12:19:06.812172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.812526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.812538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade670 with addr=10.0.0.2, port=4420 00:23:05.780 [2024-04-26 12:19:06.812546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade670 is same with the state(5) to be set 00:23:05.780 [2024-04-26 12:19:06.812556] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade670 (9): Bad file descriptor 00:23:05.780 [2024-04-26 12:19:06.812573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.780 [2024-04-26 12:19:06.812579] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.780 [2024-04-26 12:19:06.812586] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.780 [2024-04-26 12:19:06.812596] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:05.780 12:19:06 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:05.780 12:19:06 -- common/autotest_common.sh@904 -- # return 0 00:23:05.780 12:19:06 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:05.780 12:19:06 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:05.780 12:19:06 -- common/autotest_common.sh@901 -- # local max=10 00:23:05.780 12:19:06 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:05.780 12:19:06 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:05.780 [2024-04-26 12:19:06.821857] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.780 [2024-04-26 12:19:06.822188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.822539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.822549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade670 with addr=10.0.0.2, port=4420 00:23:05.780 [2024-04-26 12:19:06.822557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade670 is same with the state(5) to be set 00:23:05.780 [2024-04-26 12:19:06.822568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade670 (9): Bad file descriptor 00:23:05.780 [2024-04-26 12:19:06.822578] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.780 [2024-04-26 12:19:06.822584] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.780 [2024-04-26 12:19:06.822591] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.780 [2024-04-26 12:19:06.822603] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:05.780 12:19:06 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:05.780 12:19:06 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:05.780 12:19:06 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:05.780 12:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.780 12:19:06 -- common/autotest_common.sh@10 -- # set +x 00:23:05.780 12:19:06 -- host/discovery.sh@63 -- # sort -n 00:23:05.780 12:19:06 -- host/discovery.sh@63 -- # xargs 00:23:05.780 [2024-04-26 12:19:06.831911] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.780 [2024-04-26 12:19:06.832264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.832560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.832571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade670 with addr=10.0.0.2, port=4420 00:23:05.780 [2024-04-26 12:19:06.832578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade670 is same with the state(5) to be set 00:23:05.780 [2024-04-26 12:19:06.832592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade670 (9): Bad file descriptor 00:23:05.780 [2024-04-26 12:19:06.832603] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.780 [2024-04-26 12:19:06.832609] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.780 [2024-04-26 12:19:06.832616] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.780 [2024-04-26 12:19:06.832627] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.780 12:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.780 [2024-04-26 12:19:06.841964] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.780 [2024-04-26 12:19:06.842270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.842608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.780 [2024-04-26 12:19:06.842617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade670 with addr=10.0.0.2, port=4420 00:23:05.780 [2024-04-26 12:19:06.842624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade670 is same with the state(5) to be set 00:23:05.780 [2024-04-26 12:19:06.842635] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade670 (9): Bad file descriptor 00:23:05.780 [2024-04-26 12:19:06.842644] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.780 [2024-04-26 12:19:06.842651] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.780 [2024-04-26 12:19:06.842658] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.781 [2024-04-26 12:19:06.842668] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:05.781 [2024-04-26 12:19:06.852015] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.781 [2024-04-26 12:19:06.852318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.781 [2024-04-26 12:19:06.852666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.781 [2024-04-26 12:19:06.852675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade670 with addr=10.0.0.2, port=4420 00:23:05.781 [2024-04-26 12:19:06.852682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade670 is same with the state(5) to be set 00:23:05.781 [2024-04-26 12:19:06.852693] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade670 (9): Bad file descriptor 00:23:05.781 [2024-04-26 12:19:06.852702] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.781 [2024-04-26 12:19:06.852708] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.781 [2024-04-26 12:19:06.852715] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.781 [2024-04-26 12:19:06.852725] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.781 [2024-04-26 12:19:06.858766] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:05.781 [2024-04-26 12:19:06.858784] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:05.781 12:19:06 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:23:05.781 12:19:06 -- common/autotest_common.sh@906 -- # sleep 1 00:23:06.725 12:19:07 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:06.725 12:19:07 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:06.725 12:19:07 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:06.725 12:19:07 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:06.725 12:19:07 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:06.725 12:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.725 12:19:07 -- host/discovery.sh@63 -- # sort -n 00:23:06.725 12:19:07 -- common/autotest_common.sh@10 -- # set +x 00:23:06.725 12:19:07 -- host/discovery.sh@63 -- # xargs 00:23:06.725 12:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.725 12:19:07 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:23:06.725 12:19:07 -- common/autotest_common.sh@904 -- # return 0 00:23:06.725 12:19:07 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:06.725 12:19:07 -- host/discovery.sh@79 -- # expected_count=0 00:23:06.725 12:19:07 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:06.725 12:19:07 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:06.725 12:19:07 -- common/autotest_common.sh@901 -- # local max=10 00:23:06.725 12:19:07 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:06.725 12:19:07 -- common/autotest_common.sh@903 
-- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:06.725 12:19:07 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:06.725 12:19:07 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:06.725 12:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.725 12:19:07 -- common/autotest_common.sh@10 -- # set +x 00:23:06.725 12:19:07 -- host/discovery.sh@74 -- # jq '. | length' 00:23:06.987 12:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.987 12:19:07 -- host/discovery.sh@74 -- # notification_count=0 00:23:06.987 12:19:07 -- host/discovery.sh@75 -- # notify_id=2 00:23:06.987 12:19:07 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:06.987 12:19:07 -- common/autotest_common.sh@904 -- # return 0 00:23:06.987 12:19:07 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:06.987 12:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.987 12:19:07 -- common/autotest_common.sh@10 -- # set +x 00:23:06.987 12:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.987 12:19:07 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:06.987 12:19:07 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:06.987 12:19:07 -- common/autotest_common.sh@901 -- # local max=10 00:23:06.987 12:19:07 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:06.987 12:19:07 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:06.987 12:19:08 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:06.987 12:19:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:06.987 12:19:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:06.987 12:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.987 12:19:08 -- host/discovery.sh@59 -- # sort 00:23:06.987 12:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.987 12:19:08 -- host/discovery.sh@59 -- # xargs 00:23:06.987 12:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.987 12:19:08 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:06.987 12:19:08 -- common/autotest_common.sh@904 -- # return 0 00:23:06.987 12:19:08 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:06.987 12:19:08 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:06.987 12:19:08 -- common/autotest_common.sh@901 -- # local max=10 00:23:06.987 12:19:08 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:06.987 12:19:08 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:06.987 12:19:08 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:06.987 12:19:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.987 12:19:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:06.987 12:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.987 12:19:08 -- host/discovery.sh@55 -- # sort 00:23:06.987 12:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.987 12:19:08 -- host/discovery.sh@55 -- # xargs 00:23:06.987 12:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.987 12:19:08 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:06.987 12:19:08 -- common/autotest_common.sh@904 -- # return 0 00:23:06.987 
12:19:08 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:06.987 12:19:08 -- host/discovery.sh@79 -- # expected_count=2 00:23:06.987 12:19:08 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:06.987 12:19:08 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:06.987 12:19:08 -- common/autotest_common.sh@901 -- # local max=10 00:23:06.987 12:19:08 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:06.987 12:19:08 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:06.987 12:19:08 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:06.987 12:19:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:06.987 12:19:08 -- host/discovery.sh@74 -- # jq '. | length' 00:23:06.987 12:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.987 12:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:06.987 12:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.987 12:19:08 -- host/discovery.sh@74 -- # notification_count=2 00:23:06.987 12:19:08 -- host/discovery.sh@75 -- # notify_id=4 00:23:06.987 12:19:08 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:06.987 12:19:08 -- common/autotest_common.sh@904 -- # return 0 00:23:06.987 12:19:08 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:06.987 12:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.987 12:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:08.377 [2024-04-26 12:19:09.219993] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:08.377 [2024-04-26 12:19:09.220011] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:08.377 [2024-04-26 12:19:09.220024] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:08.378 [2024-04-26 12:19:09.308326] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:08.378 [2024-04-26 12:19:09.371922] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:08.378 [2024-04-26 12:19:09.371953] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:08.378 12:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.378 12:19:09 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:08.378 12:19:09 -- common/autotest_common.sh@638 -- # local es=0 00:23:08.378 12:19:09 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:08.378 12:19:09 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:08.378 12:19:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:08.378 12:19:09 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:08.378 12:19:09 -- common/autotest_common.sh@630 -- # case 
"$(type -t "$arg")" in 00:23:08.378 12:19:09 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:08.378 12:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.378 12:19:09 -- common/autotest_common.sh@10 -- # set +x 00:23:08.378 request: 00:23:08.378 { 00:23:08.378 "name": "nvme", 00:23:08.378 "trtype": "tcp", 00:23:08.378 "traddr": "10.0.0.2", 00:23:08.378 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:08.378 "adrfam": "ipv4", 00:23:08.378 "trsvcid": "8009", 00:23:08.378 "wait_for_attach": true, 00:23:08.378 "method": "bdev_nvme_start_discovery", 00:23:08.378 "req_id": 1 00:23:08.378 } 00:23:08.378 Got JSON-RPC error response 00:23:08.378 response: 00:23:08.378 { 00:23:08.378 "code": -17, 00:23:08.378 "message": "File exists" 00:23:08.378 } 00:23:08.378 12:19:09 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:08.378 12:19:09 -- common/autotest_common.sh@641 -- # es=1 00:23:08.378 12:19:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:08.378 12:19:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:08.378 12:19:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:08.378 12:19:09 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:08.378 12:19:09 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:08.378 12:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.378 12:19:09 -- common/autotest_common.sh@10 -- # set +x 00:23:08.378 12:19:09 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:08.378 12:19:09 -- host/discovery.sh@67 -- # sort 00:23:08.378 12:19:09 -- host/discovery.sh@67 -- # xargs 00:23:08.378 12:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.378 12:19:09 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:08.378 12:19:09 -- host/discovery.sh@146 -- # get_bdev_list 00:23:08.378 12:19:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.378 12:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.378 12:19:09 -- common/autotest_common.sh@10 -- # set +x 00:23:08.378 12:19:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:08.378 12:19:09 -- host/discovery.sh@55 -- # sort 00:23:08.378 12:19:09 -- host/discovery.sh@55 -- # xargs 00:23:08.378 12:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.378 12:19:09 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:08.378 12:19:09 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:08.378 12:19:09 -- common/autotest_common.sh@638 -- # local es=0 00:23:08.378 12:19:09 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:08.378 12:19:09 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:08.378 12:19:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:08.378 12:19:09 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:08.378 12:19:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:08.378 12:19:09 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 
00:23:08.378 12:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.378 12:19:09 -- common/autotest_common.sh@10 -- # set +x 00:23:08.378 request: 00:23:08.378 { 00:23:08.378 "name": "nvme_second", 00:23:08.378 "trtype": "tcp", 00:23:08.378 "traddr": "10.0.0.2", 00:23:08.378 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:08.378 "adrfam": "ipv4", 00:23:08.378 "trsvcid": "8009", 00:23:08.378 "wait_for_attach": true, 00:23:08.378 "method": "bdev_nvme_start_discovery", 00:23:08.378 "req_id": 1 00:23:08.378 } 00:23:08.378 Got JSON-RPC error response 00:23:08.378 response: 00:23:08.378 { 00:23:08.378 "code": -17, 00:23:08.378 "message": "File exists" 00:23:08.378 } 00:23:08.378 12:19:09 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:08.378 12:19:09 -- common/autotest_common.sh@641 -- # es=1 00:23:08.378 12:19:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:08.378 12:19:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:08.378 12:19:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:08.378 12:19:09 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:08.378 12:19:09 -- host/discovery.sh@67 -- # xargs 00:23:08.378 12:19:09 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:08.378 12:19:09 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:08.378 12:19:09 -- host/discovery.sh@67 -- # sort 00:23:08.378 12:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.378 12:19:09 -- common/autotest_common.sh@10 -- # set +x 00:23:08.378 12:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.378 12:19:09 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:08.378 12:19:09 -- host/discovery.sh@152 -- # get_bdev_list 00:23:08.378 12:19:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.378 12:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.378 12:19:09 -- common/autotest_common.sh@10 -- # set +x 00:23:08.378 12:19:09 -- host/discovery.sh@55 -- # sort 00:23:08.378 12:19:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:08.378 12:19:09 -- host/discovery.sh@55 -- # xargs 00:23:08.378 12:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.736 12:19:09 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:08.736 12:19:09 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:08.736 12:19:09 -- common/autotest_common.sh@638 -- # local es=0 00:23:08.736 12:19:09 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:08.736 12:19:09 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:08.736 12:19:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:08.736 12:19:09 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:08.736 12:19:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:08.736 12:19:09 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:08.736 12:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.736 12:19:09 -- common/autotest_common.sh@10 -- # set +x 00:23:09.678 [2024-04-26 12:19:10.639446] 
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.678 [2024-04-26 12:19:10.639782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.678 [2024-04-26 12:19:10.639794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af0d40 with addr=10.0.0.2, port=8010 00:23:09.678 [2024-04-26 12:19:10.639805] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:09.678 [2024-04-26 12:19:10.639812] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:09.678 [2024-04-26 12:19:10.639820] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:10.618 [2024-04-26 12:19:11.641797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.618 [2024-04-26 12:19:11.642113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.618 [2024-04-26 12:19:11.642126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af0d40 with addr=10.0.0.2, port=8010 00:23:10.618 [2024-04-26 12:19:11.642138] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:10.618 [2024-04-26 12:19:11.642145] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:10.618 [2024-04-26 12:19:11.642152] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:11.559 [2024-04-26 12:19:12.643772] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:11.559 request: 00:23:11.559 { 00:23:11.559 "name": "nvme_second", 00:23:11.559 "trtype": "tcp", 00:23:11.559 "traddr": "10.0.0.2", 00:23:11.559 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:11.559 "adrfam": "ipv4", 00:23:11.559 "trsvcid": "8010", 00:23:11.559 "attach_timeout_ms": 3000, 00:23:11.559 "method": "bdev_nvme_start_discovery", 00:23:11.559 "req_id": 1 00:23:11.559 } 00:23:11.559 Got JSON-RPC error response 00:23:11.559 response: 00:23:11.559 { 00:23:11.559 "code": -110, 00:23:11.559 "message": "Connection timed out" 00:23:11.559 } 00:23:11.559 12:19:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:11.559 12:19:12 -- common/autotest_common.sh@641 -- # es=1 00:23:11.560 12:19:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:11.560 12:19:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:11.560 12:19:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:11.560 12:19:12 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:11.560 12:19:12 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:11.560 12:19:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.560 12:19:12 -- common/autotest_common.sh@10 -- # set +x 00:23:11.560 12:19:12 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:11.560 12:19:12 -- host/discovery.sh@67 -- # sort 00:23:11.560 12:19:12 -- host/discovery.sh@67 -- # xargs 00:23:11.560 12:19:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.560 12:19:12 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:11.560 12:19:12 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:11.560 12:19:12 -- host/discovery.sh@161 -- # kill 3510490 00:23:11.560 12:19:12 -- host/discovery.sh@162 -- # nvmftestfini 00:23:11.560 12:19:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:11.560 12:19:12 -- nvmf/common.sh@117 -- # sync 00:23:11.560 12:19:12 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:11.560 12:19:12 -- nvmf/common.sh@120 -- # set +e 00:23:11.560 12:19:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:11.560 12:19:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:11.560 rmmod nvme_tcp 00:23:11.560 rmmod nvme_fabrics 00:23:11.560 rmmod nvme_keyring 00:23:11.560 12:19:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:11.560 12:19:12 -- nvmf/common.sh@124 -- # set -e 00:23:11.560 12:19:12 -- nvmf/common.sh@125 -- # return 0 00:23:11.560 12:19:12 -- nvmf/common.sh@478 -- # '[' -n 3510225 ']' 00:23:11.560 12:19:12 -- nvmf/common.sh@479 -- # killprocess 3510225 00:23:11.560 12:19:12 -- common/autotest_common.sh@936 -- # '[' -z 3510225 ']' 00:23:11.560 12:19:12 -- common/autotest_common.sh@940 -- # kill -0 3510225 00:23:11.560 12:19:12 -- common/autotest_common.sh@941 -- # uname 00:23:11.820 12:19:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:11.820 12:19:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3510225 00:23:11.820 12:19:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:11.820 12:19:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:11.820 12:19:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3510225' 00:23:11.820 killing process with pid 3510225 00:23:11.820 12:19:12 -- common/autotest_common.sh@955 -- # kill 3510225 00:23:11.820 12:19:12 -- common/autotest_common.sh@960 -- # wait 3510225 00:23:11.820 12:19:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:11.820 12:19:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:11.821 12:19:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:11.821 12:19:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:11.821 12:19:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:11.821 12:19:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.821 12:19:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.821 12:19:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.363 12:19:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:14.364 00:23:14.364 real 0m20.884s 00:23:14.364 user 0m25.539s 00:23:14.364 sys 0m6.715s 00:23:14.364 12:19:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:14.364 12:19:15 -- common/autotest_common.sh@10 -- # set +x 00:23:14.364 ************************************ 00:23:14.364 END TEST nvmf_discovery 00:23:14.364 ************************************ 00:23:14.364 12:19:15 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:14.364 12:19:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:14.364 12:19:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:14.364 12:19:15 -- common/autotest_common.sh@10 -- # set +x 00:23:14.364 ************************************ 00:23:14.364 START TEST nvmf_discovery_remove_ifc 00:23:14.364 ************************************ 00:23:14.364 12:19:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:14.364 * Looking for test storage... 
00:23:14.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:14.364 12:19:15 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.364 12:19:15 -- nvmf/common.sh@7 -- # uname -s 00:23:14.364 12:19:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.364 12:19:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.364 12:19:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.364 12:19:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.364 12:19:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.364 12:19:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.364 12:19:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.364 12:19:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.364 12:19:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.364 12:19:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.364 12:19:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:14.364 12:19:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:14.364 12:19:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.364 12:19:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.364 12:19:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.364 12:19:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.364 12:19:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.364 12:19:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.364 12:19:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.364 12:19:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.364 12:19:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.364 12:19:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.364 12:19:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.364 12:19:15 -- paths/export.sh@5 -- # export PATH 00:23:14.364 12:19:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.364 12:19:15 -- nvmf/common.sh@47 -- # : 0 00:23:14.364 12:19:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:14.364 12:19:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:14.364 12:19:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.364 12:19:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.364 12:19:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.364 12:19:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:14.364 12:19:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:14.364 12:19:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:14.364 12:19:15 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:14.364 12:19:15 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:14.364 12:19:15 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:14.364 12:19:15 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:14.364 12:19:15 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:14.364 12:19:15 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:14.364 12:19:15 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:14.364 12:19:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:14.364 12:19:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.364 12:19:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:14.364 12:19:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:14.364 12:19:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:14.364 12:19:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.364 12:19:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.364 12:19:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.364 12:19:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:14.364 12:19:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:14.364 12:19:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:14.364 12:19:15 -- common/autotest_common.sh@10 -- # set +x 00:23:20.955 12:19:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:20.955 12:19:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:20.955 12:19:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:20.955 12:19:21 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:20.955 12:19:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:20.955 12:19:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:20.955 12:19:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:20.955 12:19:21 -- nvmf/common.sh@295 -- # net_devs=() 00:23:20.955 12:19:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:20.955 12:19:21 -- nvmf/common.sh@296 -- # e810=() 00:23:20.955 12:19:21 -- nvmf/common.sh@296 -- # local -ga e810 00:23:20.955 12:19:21 -- nvmf/common.sh@297 -- # x722=() 00:23:20.955 12:19:21 -- nvmf/common.sh@297 -- # local -ga x722 00:23:20.955 12:19:21 -- nvmf/common.sh@298 -- # mlx=() 00:23:20.955 12:19:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:20.955 12:19:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.955 12:19:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.955 12:19:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.955 12:19:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.955 12:19:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.955 12:19:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.955 12:19:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.955 12:19:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.955 12:19:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.955 12:19:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.955 12:19:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.955 12:19:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:20.955 12:19:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:20.955 12:19:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:20.955 12:19:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.955 12:19:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:20.955 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:20.955 12:19:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.955 12:19:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:20.955 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:20.955 12:19:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:20.955 12:19:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:20.955 12:19:21 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.955 12:19:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.955 12:19:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:20.955 12:19:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.955 12:19:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:20.955 Found net devices under 0000:31:00.0: cvl_0_0 00:23:20.955 12:19:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.955 12:19:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.955 12:19:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.955 12:19:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:20.955 12:19:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.955 12:19:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:20.955 Found net devices under 0000:31:00.1: cvl_0_1 00:23:20.955 12:19:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.955 12:19:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:20.955 12:19:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:20.955 12:19:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:20.955 12:19:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:20.955 12:19:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.955 12:19:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.955 12:19:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.955 12:19:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:20.955 12:19:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.955 12:19:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.955 12:19:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:20.955 12:19:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.955 12:19:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.955 12:19:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:20.955 12:19:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:20.955 12:19:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.955 12:19:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.955 12:19:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.955 12:19:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.955 12:19:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:20.955 12:19:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.955 12:19:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.955 12:19:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.955 12:19:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:20.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:20.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:23:20.955 00:23:20.955 --- 10.0.0.2 ping statistics --- 00:23:20.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.955 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:23:20.955 12:19:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:23:20.955 00:23:20.955 --- 10.0.0.1 ping statistics --- 00:23:20.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.955 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:23:20.955 12:19:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.955 12:19:22 -- nvmf/common.sh@411 -- # return 0 00:23:20.955 12:19:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:20.955 12:19:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.955 12:19:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:20.955 12:19:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:20.955 12:19:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.955 12:19:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:20.955 12:19:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:20.955 12:19:22 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:20.955 12:19:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:20.955 12:19:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:20.955 12:19:22 -- common/autotest_common.sh@10 -- # set +x 00:23:20.955 12:19:22 -- nvmf/common.sh@470 -- # nvmfpid=3516823 00:23:20.955 12:19:22 -- nvmf/common.sh@471 -- # waitforlisten 3516823 00:23:20.955 12:19:22 -- common/autotest_common.sh@817 -- # '[' -z 3516823 ']' 00:23:20.955 12:19:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.955 12:19:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:20.956 12:19:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.956 12:19:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:20.956 12:19:22 -- common/autotest_common.sh@10 -- # set +x 00:23:20.956 12:19:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:20.956 [2024-04-26 12:19:22.120959] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:23:20.956 [2024-04-26 12:19:22.121025] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.956 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.215 [2024-04-26 12:19:22.207802] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.215 [2024-04-26 12:19:22.298278] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.215 [2024-04-26 12:19:22.298335] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:21.215 [2024-04-26 12:19:22.298344] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.215 [2024-04-26 12:19:22.298351] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.215 [2024-04-26 12:19:22.298357] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.215 [2024-04-26 12:19:22.298383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.786 12:19:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:21.786 12:19:22 -- common/autotest_common.sh@850 -- # return 0 00:23:21.786 12:19:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:21.786 12:19:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:21.786 12:19:22 -- common/autotest_common.sh@10 -- # set +x 00:23:21.786 12:19:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.786 12:19:22 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:21.786 12:19:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.786 12:19:22 -- common/autotest_common.sh@10 -- # set +x 00:23:21.786 [2024-04-26 12:19:22.945673] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.786 [2024-04-26 12:19:22.953902] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:21.786 null0 00:23:21.786 [2024-04-26 12:19:22.985870] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.047 12:19:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.047 12:19:23 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3516886 00:23:22.047 12:19:23 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3516886 /tmp/host.sock 00:23:22.047 12:19:23 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:22.047 12:19:23 -- common/autotest_common.sh@817 -- # '[' -z 3516886 ']' 00:23:22.047 12:19:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:22.047 12:19:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:22.047 12:19:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:22.047 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:22.047 12:19:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:22.047 12:19:23 -- common/autotest_common.sh@10 -- # set +x 00:23:22.047 [2024-04-26 12:19:23.057924] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:23:22.047 [2024-04-26 12:19:23.057987] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3516886 ] 00:23:22.047 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.047 [2024-04-26 12:19:23.122764] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.047 [2024-04-26 12:19:23.195814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.616 12:19:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:22.616 12:19:23 -- common/autotest_common.sh@850 -- # return 0 00:23:22.616 12:19:23 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:22.616 12:19:23 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:22.616 12:19:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.616 12:19:23 -- common/autotest_common.sh@10 -- # set +x 00:23:22.616 12:19:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.616 12:19:23 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:22.616 12:19:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.616 12:19:23 -- common/autotest_common.sh@10 -- # set +x 00:23:22.877 12:19:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.877 12:19:23 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:22.877 12:19:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.877 12:19:23 -- common/autotest_common.sh@10 -- # set +x 00:23:23.818 [2024-04-26 12:19:24.955863] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:23.818 [2024-04-26 12:19:24.955883] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:23.818 [2024-04-26 12:19:24.955897] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:24.079 [2024-04-26 12:19:25.085322] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:24.341 [2024-04-26 12:19:25.310266] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:24.341 [2024-04-26 12:19:25.310316] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:24.341 [2024-04-26 12:19:25.310336] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:24.341 [2024-04-26 12:19:25.310350] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:24.341 [2024-04-26 12:19:25.310371] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:24.341 12:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:24.341 [2024-04-26 12:19:25.313971] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2158ed0 was disconnected and freed. delete nvme_qpair. 
00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:24.341 12:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:24.341 12:19:25 -- common/autotest_common.sh@10 -- # set +x 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:24.341 12:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:24.341 12:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:24.341 12:19:25 -- common/autotest_common.sh@10 -- # set +x 00:23:24.341 12:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:24.341 12:19:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:25.726 12:19:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:25.726 12:19:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:25.726 12:19:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.726 12:19:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:25.726 12:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.726 12:19:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:25.726 12:19:26 -- common/autotest_common.sh@10 -- # set +x 00:23:25.726 12:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.726 12:19:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:25.726 12:19:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:26.667 12:19:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:26.667 12:19:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:26.667 12:19:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.667 12:19:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:26.667 12:19:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.667 12:19:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:26.667 12:19:27 -- common/autotest_common.sh@10 -- # set +x 00:23:26.667 12:19:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.667 12:19:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:26.667 12:19:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:27.611 12:19:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:27.611 12:19:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.611 12:19:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:23:27.611 12:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.611 12:19:28 -- common/autotest_common.sh@10 -- # set +x 00:23:27.611 12:19:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:27.611 12:19:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:27.611 12:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.611 12:19:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:27.611 12:19:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:28.556 12:19:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:28.556 12:19:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:28.556 12:19:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.556 12:19:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:28.556 12:19:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.556 12:19:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:28.556 12:19:29 -- common/autotest_common.sh@10 -- # set +x 00:23:28.556 12:19:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.847 12:19:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:28.847 12:19:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:29.788 [2024-04-26 12:19:30.750729] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:29.788 [2024-04-26 12:19:30.750773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.788 [2024-04-26 12:19:30.750784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.788 [2024-04-26 12:19:30.750794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.788 [2024-04-26 12:19:30.750801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.788 [2024-04-26 12:19:30.750809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.788 [2024-04-26 12:19:30.750816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.788 [2024-04-26 12:19:30.750824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.788 [2024-04-26 12:19:30.750831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.788 [2024-04-26 12:19:30.750843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.788 [2024-04-26 12:19:30.750850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.788 [2024-04-26 12:19:30.750858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211f3f0 is same with the state(5) to be set 00:23:29.788 [2024-04-26 12:19:30.760749] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211f3f0 (9): Bad file descriptor 
00:23:29.788 [2024-04-26 12:19:30.770789] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.788 12:19:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:29.788 12:19:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:29.788 12:19:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.788 12:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.788 12:19:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:29.788 12:19:30 -- common/autotest_common.sh@10 -- # set +x 00:23:29.788 12:19:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.730 [2024-04-26 12:19:31.784894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:31.671 [2024-04-26 12:19:32.808863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:31.671 [2024-04-26 12:19:32.808909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211f3f0 with addr=10.0.0.2, port=4420 00:23:31.671 [2024-04-26 12:19:32.808923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211f3f0 is same with the state(5) to be set 00:23:31.671 [2024-04-26 12:19:32.809302] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211f3f0 (9): Bad file descriptor 00:23:31.671 [2024-04-26 12:19:32.809328] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.671 [2024-04-26 12:19:32.809350] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:31.671 [2024-04-26 12:19:32.809373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.671 [2024-04-26 12:19:32.809383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.671 [2024-04-26 12:19:32.809393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.671 [2024-04-26 12:19:32.809401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.671 [2024-04-26 12:19:32.809409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.672 [2024-04-26 12:19:32.809416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.672 [2024-04-26 12:19:32.809424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.672 [2024-04-26 12:19:32.809431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.672 [2024-04-26 12:19:32.809439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.672 [2024-04-26 12:19:32.809446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.672 [2024-04-26 12:19:32.809453] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:23:31.672 [2024-04-26 12:19:32.809947] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211f800 (9): Bad file descriptor 00:23:31.672 [2024-04-26 12:19:32.810959] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:31.672 [2024-04-26 12:19:32.810971] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:31.672 12:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.672 12:19:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:31.672 12:19:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:33.056 12:19:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:33.056 12:19:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.056 12:19:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:33.056 12:19:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:33.056 12:19:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.056 12:19:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:33.056 12:19:33 -- common/autotest_common.sh@10 -- # set +x 00:23:33.056 12:19:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.056 12:19:33 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:33.056 12:19:33 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.056 12:19:33 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.056 12:19:33 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:33.056 12:19:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:33.056 12:19:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.056 12:19:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:33.056 12:19:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.057 12:19:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:33.057 12:19:33 -- common/autotest_common.sh@10 -- # set +x 00:23:33.057 12:19:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:33.057 12:19:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.057 12:19:34 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:33.057 12:19:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:33.998 [2024-04-26 12:19:34.870778] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:33.998 [2024-04-26 12:19:34.870799] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:33.998 [2024-04-26 12:19:34.870812] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:33.998 [2024-04-26 12:19:35.001269] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:33.998 12:19:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:33.998 12:19:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.998 12:19:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.998 12:19:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:33.998 12:19:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:33.998 12:19:35 
-- common/autotest_common.sh@10 -- # set +x 00:23:33.998 12:19:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:33.998 12:19:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.998 12:19:35 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:33.998 12:19:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:34.259 [2024-04-26 12:19:35.223561] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:34.259 [2024-04-26 12:19:35.223604] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:34.259 [2024-04-26 12:19:35.223624] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:34.259 [2024-04-26 12:19:35.223639] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:34.259 [2024-04-26 12:19:35.223648] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:34.259 [2024-04-26 12:19:35.268567] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x212fcc0 was disconnected and freed. delete nvme_qpair. 00:23:35.201 12:19:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:35.201 12:19:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.201 12:19:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:35.201 12:19:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.201 12:19:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:35.201 12:19:36 -- common/autotest_common.sh@10 -- # set +x 00:23:35.201 12:19:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:35.201 12:19:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.201 12:19:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:35.201 12:19:36 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:35.201 12:19:36 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3516886 00:23:35.201 12:19:36 -- common/autotest_common.sh@936 -- # '[' -z 3516886 ']' 00:23:35.201 12:19:36 -- common/autotest_common.sh@940 -- # kill -0 3516886 00:23:35.201 12:19:36 -- common/autotest_common.sh@941 -- # uname 00:23:35.201 12:19:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:35.201 12:19:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3516886 00:23:35.201 12:19:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:35.201 12:19:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:35.201 12:19:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3516886' 00:23:35.201 killing process with pid 3516886 00:23:35.201 12:19:36 -- common/autotest_common.sh@955 -- # kill 3516886 00:23:35.201 12:19:36 -- common/autotest_common.sh@960 -- # wait 3516886 00:23:35.201 12:19:36 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:35.201 12:19:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:35.201 12:19:36 -- nvmf/common.sh@117 -- # sync 00:23:35.201 12:19:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.201 12:19:36 -- nvmf/common.sh@120 -- # set +e 00:23:35.201 12:19:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.201 12:19:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:35.201 rmmod nvme_tcp 00:23:35.201 rmmod nvme_fabrics 00:23:35.201 rmmod nvme_keyring 00:23:35.201 12:19:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.201 12:19:36 -- 
nvmf/common.sh@124 -- # set -e 00:23:35.201 12:19:36 -- nvmf/common.sh@125 -- # return 0 00:23:35.201 12:19:36 -- nvmf/common.sh@478 -- # '[' -n 3516823 ']' 00:23:35.201 12:19:36 -- nvmf/common.sh@479 -- # killprocess 3516823 00:23:35.201 12:19:36 -- common/autotest_common.sh@936 -- # '[' -z 3516823 ']' 00:23:35.201 12:19:36 -- common/autotest_common.sh@940 -- # kill -0 3516823 00:23:35.201 12:19:36 -- common/autotest_common.sh@941 -- # uname 00:23:35.201 12:19:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:35.201 12:19:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3516823 00:23:35.463 12:19:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:35.463 12:19:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:35.463 12:19:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3516823' 00:23:35.463 killing process with pid 3516823 00:23:35.463 12:19:36 -- common/autotest_common.sh@955 -- # kill 3516823 00:23:35.463 12:19:36 -- common/autotest_common.sh@960 -- # wait 3516823 00:23:35.463 12:19:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:35.463 12:19:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:35.463 12:19:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:35.463 12:19:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.463 12:19:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:35.463 12:19:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.463 12:19:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.463 12:19:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.013 12:19:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.013 00:23:38.013 real 0m23.427s 00:23:38.013 user 0m28.079s 00:23:38.013 sys 0m6.230s 00:23:38.013 12:19:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:38.013 12:19:38 -- common/autotest_common.sh@10 -- # set +x 00:23:38.013 ************************************ 00:23:38.013 END TEST nvmf_discovery_remove_ifc 00:23:38.013 ************************************ 00:23:38.013 12:19:38 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:38.013 12:19:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:38.013 12:19:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:38.013 12:19:38 -- common/autotest_common.sh@10 -- # set +x 00:23:38.013 ************************************ 00:23:38.013 START TEST nvmf_identify_kernel_target 00:23:38.013 ************************************ 00:23:38.013 12:19:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:38.013 * Looking for test storage... 
00:23:38.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:38.013 12:19:38 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.013 12:19:38 -- nvmf/common.sh@7 -- # uname -s 00:23:38.013 12:19:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.013 12:19:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.013 12:19:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.013 12:19:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.013 12:19:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.013 12:19:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.013 12:19:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.013 12:19:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.013 12:19:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.013 12:19:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.013 12:19:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:38.013 12:19:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:38.013 12:19:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.013 12:19:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.013 12:19:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.013 12:19:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.013 12:19:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.013 12:19:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.013 12:19:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.013 12:19:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.013 12:19:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.013 12:19:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.013 12:19:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.013 12:19:38 -- paths/export.sh@5 -- # export PATH 00:23:38.013 12:19:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.013 12:19:38 -- nvmf/common.sh@47 -- # : 0 00:23:38.013 12:19:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.013 12:19:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.013 12:19:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.013 12:19:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.013 12:19:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.013 12:19:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:38.013 12:19:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.013 12:19:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.013 12:19:38 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:38.013 12:19:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:38.013 12:19:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.013 12:19:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:38.013 12:19:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:38.013 12:19:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:38.013 12:19:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.013 12:19:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.013 12:19:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.013 12:19:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:38.013 12:19:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:38.013 12:19:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.013 12:19:38 -- common/autotest_common.sh@10 -- # set +x 00:23:44.707 12:19:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:44.707 12:19:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:44.707 12:19:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:44.707 12:19:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:44.707 12:19:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:44.707 12:19:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:44.707 12:19:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:44.707 12:19:45 -- nvmf/common.sh@295 -- # net_devs=() 00:23:44.707 12:19:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:44.707 12:19:45 -- nvmf/common.sh@296 -- # e810=() 00:23:44.707 12:19:45 -- nvmf/common.sh@296 -- # local -ga e810 00:23:44.707 12:19:45 -- nvmf/common.sh@297 -- # 
x722=() 00:23:44.707 12:19:45 -- nvmf/common.sh@297 -- # local -ga x722 00:23:44.707 12:19:45 -- nvmf/common.sh@298 -- # mlx=() 00:23:44.707 12:19:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:44.707 12:19:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.707 12:19:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.707 12:19:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.707 12:19:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.707 12:19:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.707 12:19:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.707 12:19:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.707 12:19:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.707 12:19:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.707 12:19:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.707 12:19:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.707 12:19:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:44.707 12:19:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:44.707 12:19:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:44.707 12:19:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.707 12:19:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:44.707 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:44.707 12:19:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.707 12:19:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:44.707 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:44.707 12:19:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:44.707 12:19:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:44.707 12:19:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.707 12:19:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.707 12:19:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:44.707 12:19:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.707 12:19:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:44.707 Found net devices under 0000:31:00.0: cvl_0_0 00:23:44.707 12:19:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:23:44.708 12:19:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.708 12:19:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.708 12:19:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:44.708 12:19:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.708 12:19:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:44.708 Found net devices under 0000:31:00.1: cvl_0_1 00:23:44.708 12:19:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.708 12:19:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:44.708 12:19:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:44.708 12:19:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:44.708 12:19:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:44.708 12:19:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:44.708 12:19:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.708 12:19:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.708 12:19:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.708 12:19:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:44.708 12:19:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.708 12:19:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.708 12:19:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:44.708 12:19:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.708 12:19:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.708 12:19:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:44.968 12:19:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:44.968 12:19:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.968 12:19:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.968 12:19:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.968 12:19:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.968 12:19:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:44.968 12:19:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.229 12:19:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.229 12:19:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.229 12:19:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:45.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:23:45.229 00:23:45.229 --- 10.0.0.2 ping statistics --- 00:23:45.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.229 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:23:45.229 12:19:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:45.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:23:45.229 00:23:45.229 --- 10.0.0.1 ping statistics --- 00:23:45.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.229 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:23:45.229 12:19:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.229 12:19:46 -- nvmf/common.sh@411 -- # return 0 00:23:45.229 12:19:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:45.229 12:19:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.229 12:19:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:45.229 12:19:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:45.229 12:19:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.229 12:19:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:45.229 12:19:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:45.229 12:19:46 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:45.229 12:19:46 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:45.229 12:19:46 -- nvmf/common.sh@717 -- # local ip 00:23:45.229 12:19:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:45.229 12:19:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:45.229 12:19:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.229 12:19:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.229 12:19:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:45.229 12:19:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.229 12:19:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:45.229 12:19:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:45.229 12:19:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:45.229 12:19:46 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:45.229 12:19:46 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:45.229 12:19:46 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:45.229 12:19:46 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:45.229 12:19:46 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:45.229 12:19:46 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:45.229 12:19:46 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:45.229 12:19:46 -- nvmf/common.sh@628 -- # local block nvme 00:23:45.229 12:19:46 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:45.229 12:19:46 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:45.229 12:19:46 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:45.229 12:19:46 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:48.531 Waiting for block devices as requested 00:23:48.531 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:23:48.531 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:23:48.792 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:23:48.792 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:23:48.792 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:23:49.053 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:23:49.053 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:23:49.053 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:23:49.313 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:23:49.313 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:23:49.573 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:23:49.573 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:23:49.573 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:23:49.573 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:23:49.834 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:23:49.834 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:23:49.834 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:23:50.094 12:19:51 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:50.094 12:19:51 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:50.095 12:19:51 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:50.095 12:19:51 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:50.095 12:19:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:50.095 12:19:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:50.095 12:19:51 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:50.095 12:19:51 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:50.095 12:19:51 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:50.095 No valid GPT data, bailing 00:23:50.095 12:19:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:50.095 12:19:51 -- scripts/common.sh@391 -- # pt= 00:23:50.095 12:19:51 -- scripts/common.sh@392 -- # return 1 00:23:50.095 12:19:51 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:50.095 12:19:51 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:23:50.095 12:19:51 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:50.479 12:19:51 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:50.479 12:19:51 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:50.479 12:19:51 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:50.479 12:19:51 -- nvmf/common.sh@656 -- # echo 1 00:23:50.479 12:19:51 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:23:50.479 12:19:51 -- nvmf/common.sh@658 -- # echo 1 00:23:50.479 12:19:51 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:23:50.479 12:19:51 -- nvmf/common.sh@661 -- # echo tcp 00:23:50.479 12:19:51 -- nvmf/common.sh@662 -- # echo 4420 00:23:50.479 12:19:51 -- nvmf/common.sh@663 -- # echo ipv4 00:23:50.479 12:19:51 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:50.479 12:19:51 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:23:50.479 00:23:50.479 Discovery Log Number of Records 2, Generation counter 2 00:23:50.479 =====Discovery Log Entry 0====== 00:23:50.479 trtype: tcp 00:23:50.479 adrfam: ipv4 00:23:50.479 subtype: current discovery subsystem 00:23:50.479 treq: not specified, sq flow control disable supported 00:23:50.479 portid: 1 00:23:50.479 trsvcid: 4420 00:23:50.479 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:50.479 traddr: 10.0.0.1 00:23:50.479 eflags: none 00:23:50.479 sectype: none 00:23:50.479 =====Discovery Log Entry 1====== 00:23:50.479 trtype: tcp 00:23:50.479 adrfam: ipv4 00:23:50.479 subtype: nvme subsystem 00:23:50.479 treq: not specified, sq flow control disable supported 00:23:50.479 portid: 1 00:23:50.479 trsvcid: 4420 00:23:50.479 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:50.479 traddr: 10.0.0.1 00:23:50.479 eflags: none 00:23:50.479 sectype: none 00:23:50.479 12:19:51 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:50.479 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:50.479 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.479 ===================================================== 00:23:50.479 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:50.479 ===================================================== 00:23:50.479 Controller Capabilities/Features 00:23:50.479 ================================ 00:23:50.479 Vendor ID: 0000 00:23:50.479 Subsystem Vendor ID: 0000 00:23:50.479 Serial Number: c6ca3901e8801301e929 00:23:50.479 Model Number: Linux 00:23:50.479 Firmware Version: 6.7.0-68 00:23:50.479 Recommended Arb Burst: 0 00:23:50.479 IEEE OUI Identifier: 00 00 00 00:23:50.479 Multi-path I/O 00:23:50.479 May have multiple subsystem ports: No 00:23:50.479 May have multiple controllers: No 00:23:50.479 Associated with SR-IOV VF: No 00:23:50.479 Max Data Transfer Size: Unlimited 00:23:50.479 Max Number of Namespaces: 0 00:23:50.479 Max Number of I/O Queues: 1024 00:23:50.479 NVMe Specification Version (VS): 1.3 00:23:50.479 NVMe Specification Version (Identify): 1.3 00:23:50.479 Maximum Queue Entries: 1024 00:23:50.479 Contiguous Queues Required: No 00:23:50.479 Arbitration Mechanisms Supported 00:23:50.479 Weighted Round Robin: Not Supported 00:23:50.479 Vendor Specific: Not Supported 00:23:50.479 Reset Timeout: 7500 ms 00:23:50.479 Doorbell Stride: 4 bytes 00:23:50.479 NVM Subsystem Reset: Not Supported 00:23:50.479 Command Sets Supported 00:23:50.479 NVM Command Set: Supported 00:23:50.479 Boot Partition: Not Supported 00:23:50.479 Memory Page Size Minimum: 4096 bytes 00:23:50.479 Memory Page Size Maximum: 4096 bytes 00:23:50.479 Persistent Memory Region: Not Supported 00:23:50.479 Optional Asynchronous Events Supported 00:23:50.479 Namespace Attribute Notices: Not Supported 00:23:50.479 Firmware Activation Notices: Not Supported 00:23:50.480 ANA Change Notices: Not Supported 00:23:50.480 PLE Aggregate Log Change Notices: Not Supported 00:23:50.480 LBA Status Info Alert Notices: Not Supported 00:23:50.480 EGE Aggregate Log Change Notices: Not Supported 00:23:50.480 Normal NVM Subsystem Shutdown event: Not Supported 00:23:50.480 Zone Descriptor Change Notices: Not Supported 00:23:50.480 Discovery Log Change Notices: Supported 
00:23:50.480 Controller Attributes 00:23:50.480 128-bit Host Identifier: Not Supported 00:23:50.480 Non-Operational Permissive Mode: Not Supported 00:23:50.480 NVM Sets: Not Supported 00:23:50.480 Read Recovery Levels: Not Supported 00:23:50.480 Endurance Groups: Not Supported 00:23:50.480 Predictable Latency Mode: Not Supported 00:23:50.480 Traffic Based Keep ALive: Not Supported 00:23:50.480 Namespace Granularity: Not Supported 00:23:50.480 SQ Associations: Not Supported 00:23:50.480 UUID List: Not Supported 00:23:50.480 Multi-Domain Subsystem: Not Supported 00:23:50.480 Fixed Capacity Management: Not Supported 00:23:50.480 Variable Capacity Management: Not Supported 00:23:50.480 Delete Endurance Group: Not Supported 00:23:50.480 Delete NVM Set: Not Supported 00:23:50.480 Extended LBA Formats Supported: Not Supported 00:23:50.480 Flexible Data Placement Supported: Not Supported 00:23:50.480 00:23:50.480 Controller Memory Buffer Support 00:23:50.480 ================================ 00:23:50.480 Supported: No 00:23:50.480 00:23:50.480 Persistent Memory Region Support 00:23:50.480 ================================ 00:23:50.480 Supported: No 00:23:50.480 00:23:50.480 Admin Command Set Attributes 00:23:50.480 ============================ 00:23:50.480 Security Send/Receive: Not Supported 00:23:50.480 Format NVM: Not Supported 00:23:50.480 Firmware Activate/Download: Not Supported 00:23:50.480 Namespace Management: Not Supported 00:23:50.480 Device Self-Test: Not Supported 00:23:50.480 Directives: Not Supported 00:23:50.480 NVMe-MI: Not Supported 00:23:50.480 Virtualization Management: Not Supported 00:23:50.480 Doorbell Buffer Config: Not Supported 00:23:50.480 Get LBA Status Capability: Not Supported 00:23:50.480 Command & Feature Lockdown Capability: Not Supported 00:23:50.480 Abort Command Limit: 1 00:23:50.480 Async Event Request Limit: 1 00:23:50.480 Number of Firmware Slots: N/A 00:23:50.480 Firmware Slot 1 Read-Only: N/A 00:23:50.480 Firmware Activation Without Reset: N/A 00:23:50.480 Multiple Update Detection Support: N/A 00:23:50.480 Firmware Update Granularity: No Information Provided 00:23:50.480 Per-Namespace SMART Log: No 00:23:50.480 Asymmetric Namespace Access Log Page: Not Supported 00:23:50.480 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:50.480 Command Effects Log Page: Not Supported 00:23:50.480 Get Log Page Extended Data: Supported 00:23:50.480 Telemetry Log Pages: Not Supported 00:23:50.480 Persistent Event Log Pages: Not Supported 00:23:50.480 Supported Log Pages Log Page: May Support 00:23:50.480 Commands Supported & Effects Log Page: Not Supported 00:23:50.480 Feature Identifiers & Effects Log Page:May Support 00:23:50.480 NVMe-MI Commands & Effects Log Page: May Support 00:23:50.480 Data Area 4 for Telemetry Log: Not Supported 00:23:50.480 Error Log Page Entries Supported: 1 00:23:50.480 Keep Alive: Not Supported 00:23:50.480 00:23:50.480 NVM Command Set Attributes 00:23:50.480 ========================== 00:23:50.480 Submission Queue Entry Size 00:23:50.480 Max: 1 00:23:50.480 Min: 1 00:23:50.480 Completion Queue Entry Size 00:23:50.480 Max: 1 00:23:50.480 Min: 1 00:23:50.480 Number of Namespaces: 0 00:23:50.480 Compare Command: Not Supported 00:23:50.480 Write Uncorrectable Command: Not Supported 00:23:50.480 Dataset Management Command: Not Supported 00:23:50.480 Write Zeroes Command: Not Supported 00:23:50.480 Set Features Save Field: Not Supported 00:23:50.480 Reservations: Not Supported 00:23:50.480 Timestamp: Not Supported 00:23:50.480 Copy: Not 
Supported 00:23:50.480 Volatile Write Cache: Not Present 00:23:50.480 Atomic Write Unit (Normal): 1 00:23:50.480 Atomic Write Unit (PFail): 1 00:23:50.480 Atomic Compare & Write Unit: 1 00:23:50.480 Fused Compare & Write: Not Supported 00:23:50.480 Scatter-Gather List 00:23:50.480 SGL Command Set: Supported 00:23:50.480 SGL Keyed: Not Supported 00:23:50.480 SGL Bit Bucket Descriptor: Not Supported 00:23:50.480 SGL Metadata Pointer: Not Supported 00:23:50.480 Oversized SGL: Not Supported 00:23:50.480 SGL Metadata Address: Not Supported 00:23:50.480 SGL Offset: Supported 00:23:50.480 Transport SGL Data Block: Not Supported 00:23:50.480 Replay Protected Memory Block: Not Supported 00:23:50.480 00:23:50.480 Firmware Slot Information 00:23:50.480 ========================= 00:23:50.480 Active slot: 0 00:23:50.480 00:23:50.480 00:23:50.480 Error Log 00:23:50.480 ========= 00:23:50.480 00:23:50.480 Active Namespaces 00:23:50.480 ================= 00:23:50.480 Discovery Log Page 00:23:50.480 ================== 00:23:50.480 Generation Counter: 2 00:23:50.480 Number of Records: 2 00:23:50.480 Record Format: 0 00:23:50.480 00:23:50.480 Discovery Log Entry 0 00:23:50.480 ---------------------- 00:23:50.480 Transport Type: 3 (TCP) 00:23:50.480 Address Family: 1 (IPv4) 00:23:50.480 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:50.480 Entry Flags: 00:23:50.480 Duplicate Returned Information: 0 00:23:50.480 Explicit Persistent Connection Support for Discovery: 0 00:23:50.480 Transport Requirements: 00:23:50.480 Secure Channel: Not Specified 00:23:50.480 Port ID: 1 (0x0001) 00:23:50.480 Controller ID: 65535 (0xffff) 00:23:50.480 Admin Max SQ Size: 32 00:23:50.480 Transport Service Identifier: 4420 00:23:50.480 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:50.480 Transport Address: 10.0.0.1 00:23:50.480 Discovery Log Entry 1 00:23:50.480 ---------------------- 00:23:50.480 Transport Type: 3 (TCP) 00:23:50.480 Address Family: 1 (IPv4) 00:23:50.480 Subsystem Type: 2 (NVM Subsystem) 00:23:50.480 Entry Flags: 00:23:50.480 Duplicate Returned Information: 0 00:23:50.480 Explicit Persistent Connection Support for Discovery: 0 00:23:50.480 Transport Requirements: 00:23:50.480 Secure Channel: Not Specified 00:23:50.480 Port ID: 1 (0x0001) 00:23:50.480 Controller ID: 65535 (0xffff) 00:23:50.480 Admin Max SQ Size: 32 00:23:50.480 Transport Service Identifier: 4420 00:23:50.480 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:50.480 Transport Address: 10.0.0.1 00:23:50.480 12:19:51 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:50.480 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.480 get_feature(0x01) failed 00:23:50.480 get_feature(0x02) failed 00:23:50.480 get_feature(0x04) failed 00:23:50.480 ===================================================== 00:23:50.480 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:50.480 ===================================================== 00:23:50.480 Controller Capabilities/Features 00:23:50.480 ================================ 00:23:50.480 Vendor ID: 0000 00:23:50.480 Subsystem Vendor ID: 0000 00:23:50.480 Serial Number: a779affd5634e1de7ea2 00:23:50.480 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:50.480 Firmware Version: 6.7.0-68 00:23:50.480 Recommended Arb Burst: 6 00:23:50.481 IEEE OUI Identifier: 00 00 00 
00:23:50.481 Multi-path I/O 00:23:50.481 May have multiple subsystem ports: Yes 00:23:50.481 May have multiple controllers: Yes 00:23:50.481 Associated with SR-IOV VF: No 00:23:50.481 Max Data Transfer Size: Unlimited 00:23:50.481 Max Number of Namespaces: 1024 00:23:50.481 Max Number of I/O Queues: 128 00:23:50.481 NVMe Specification Version (VS): 1.3 00:23:50.481 NVMe Specification Version (Identify): 1.3 00:23:50.481 Maximum Queue Entries: 1024 00:23:50.481 Contiguous Queues Required: No 00:23:50.481 Arbitration Mechanisms Supported 00:23:50.481 Weighted Round Robin: Not Supported 00:23:50.481 Vendor Specific: Not Supported 00:23:50.481 Reset Timeout: 7500 ms 00:23:50.481 Doorbell Stride: 4 bytes 00:23:50.481 NVM Subsystem Reset: Not Supported 00:23:50.481 Command Sets Supported 00:23:50.481 NVM Command Set: Supported 00:23:50.481 Boot Partition: Not Supported 00:23:50.481 Memory Page Size Minimum: 4096 bytes 00:23:50.481 Memory Page Size Maximum: 4096 bytes 00:23:50.481 Persistent Memory Region: Not Supported 00:23:50.481 Optional Asynchronous Events Supported 00:23:50.481 Namespace Attribute Notices: Supported 00:23:50.481 Firmware Activation Notices: Not Supported 00:23:50.481 ANA Change Notices: Supported 00:23:50.481 PLE Aggregate Log Change Notices: Not Supported 00:23:50.481 LBA Status Info Alert Notices: Not Supported 00:23:50.481 EGE Aggregate Log Change Notices: Not Supported 00:23:50.481 Normal NVM Subsystem Shutdown event: Not Supported 00:23:50.481 Zone Descriptor Change Notices: Not Supported 00:23:50.481 Discovery Log Change Notices: Not Supported 00:23:50.481 Controller Attributes 00:23:50.481 128-bit Host Identifier: Supported 00:23:50.481 Non-Operational Permissive Mode: Not Supported 00:23:50.481 NVM Sets: Not Supported 00:23:50.481 Read Recovery Levels: Not Supported 00:23:50.481 Endurance Groups: Not Supported 00:23:50.481 Predictable Latency Mode: Not Supported 00:23:50.481 Traffic Based Keep ALive: Supported 00:23:50.481 Namespace Granularity: Not Supported 00:23:50.481 SQ Associations: Not Supported 00:23:50.481 UUID List: Not Supported 00:23:50.481 Multi-Domain Subsystem: Not Supported 00:23:50.481 Fixed Capacity Management: Not Supported 00:23:50.481 Variable Capacity Management: Not Supported 00:23:50.481 Delete Endurance Group: Not Supported 00:23:50.481 Delete NVM Set: Not Supported 00:23:50.481 Extended LBA Formats Supported: Not Supported 00:23:50.481 Flexible Data Placement Supported: Not Supported 00:23:50.481 00:23:50.481 Controller Memory Buffer Support 00:23:50.481 ================================ 00:23:50.481 Supported: No 00:23:50.481 00:23:50.481 Persistent Memory Region Support 00:23:50.481 ================================ 00:23:50.481 Supported: No 00:23:50.481 00:23:50.481 Admin Command Set Attributes 00:23:50.481 ============================ 00:23:50.481 Security Send/Receive: Not Supported 00:23:50.481 Format NVM: Not Supported 00:23:50.481 Firmware Activate/Download: Not Supported 00:23:50.481 Namespace Management: Not Supported 00:23:50.481 Device Self-Test: Not Supported 00:23:50.481 Directives: Not Supported 00:23:50.481 NVMe-MI: Not Supported 00:23:50.481 Virtualization Management: Not Supported 00:23:50.481 Doorbell Buffer Config: Not Supported 00:23:50.481 Get LBA Status Capability: Not Supported 00:23:50.481 Command & Feature Lockdown Capability: Not Supported 00:23:50.481 Abort Command Limit: 4 00:23:50.481 Async Event Request Limit: 4 00:23:50.481 Number of Firmware Slots: N/A 00:23:50.481 Firmware Slot 1 Read-Only: N/A 00:23:50.481 
Firmware Activation Without Reset: N/A 00:23:50.481 Multiple Update Detection Support: N/A 00:23:50.481 Firmware Update Granularity: No Information Provided 00:23:50.481 Per-Namespace SMART Log: Yes 00:23:50.481 Asymmetric Namespace Access Log Page: Supported 00:23:50.481 ANA Transition Time : 10 sec 00:23:50.481 00:23:50.481 Asymmetric Namespace Access Capabilities 00:23:50.481 ANA Optimized State : Supported 00:23:50.481 ANA Non-Optimized State : Supported 00:23:50.481 ANA Inaccessible State : Supported 00:23:50.481 ANA Persistent Loss State : Supported 00:23:50.481 ANA Change State : Supported 00:23:50.481 ANAGRPID is not changed : No 00:23:50.481 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:50.481 00:23:50.481 ANA Group Identifier Maximum : 128 00:23:50.481 Number of ANA Group Identifiers : 128 00:23:50.481 Max Number of Allowed Namespaces : 1024 00:23:50.481 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:50.481 Command Effects Log Page: Supported 00:23:50.481 Get Log Page Extended Data: Supported 00:23:50.481 Telemetry Log Pages: Not Supported 00:23:50.481 Persistent Event Log Pages: Not Supported 00:23:50.481 Supported Log Pages Log Page: May Support 00:23:50.481 Commands Supported & Effects Log Page: Not Supported 00:23:50.481 Feature Identifiers & Effects Log Page:May Support 00:23:50.481 NVMe-MI Commands & Effects Log Page: May Support 00:23:50.481 Data Area 4 for Telemetry Log: Not Supported 00:23:50.481 Error Log Page Entries Supported: 128 00:23:50.481 Keep Alive: Supported 00:23:50.481 Keep Alive Granularity: 1000 ms 00:23:50.481 00:23:50.481 NVM Command Set Attributes 00:23:50.481 ========================== 00:23:50.481 Submission Queue Entry Size 00:23:50.481 Max: 64 00:23:50.481 Min: 64 00:23:50.481 Completion Queue Entry Size 00:23:50.481 Max: 16 00:23:50.481 Min: 16 00:23:50.481 Number of Namespaces: 1024 00:23:50.481 Compare Command: Not Supported 00:23:50.481 Write Uncorrectable Command: Not Supported 00:23:50.481 Dataset Management Command: Supported 00:23:50.481 Write Zeroes Command: Supported 00:23:50.481 Set Features Save Field: Not Supported 00:23:50.481 Reservations: Not Supported 00:23:50.481 Timestamp: Not Supported 00:23:50.481 Copy: Not Supported 00:23:50.481 Volatile Write Cache: Present 00:23:50.481 Atomic Write Unit (Normal): 1 00:23:50.481 Atomic Write Unit (PFail): 1 00:23:50.481 Atomic Compare & Write Unit: 1 00:23:50.481 Fused Compare & Write: Not Supported 00:23:50.481 Scatter-Gather List 00:23:50.481 SGL Command Set: Supported 00:23:50.481 SGL Keyed: Not Supported 00:23:50.481 SGL Bit Bucket Descriptor: Not Supported 00:23:50.481 SGL Metadata Pointer: Not Supported 00:23:50.481 Oversized SGL: Not Supported 00:23:50.481 SGL Metadata Address: Not Supported 00:23:50.481 SGL Offset: Supported 00:23:50.482 Transport SGL Data Block: Not Supported 00:23:50.482 Replay Protected Memory Block: Not Supported 00:23:50.482 00:23:50.482 Firmware Slot Information 00:23:50.482 ========================= 00:23:50.482 Active slot: 0 00:23:50.482 00:23:50.482 Asymmetric Namespace Access 00:23:50.482 =========================== 00:23:50.482 Change Count : 0 00:23:50.482 Number of ANA Group Descriptors : 1 00:23:50.482 ANA Group Descriptor : 0 00:23:50.482 ANA Group ID : 1 00:23:50.482 Number of NSID Values : 1 00:23:50.482 Change Count : 0 00:23:50.482 ANA State : 1 00:23:50.482 Namespace Identifier : 1 00:23:50.482 00:23:50.482 Commands Supported and Effects 00:23:50.482 ============================== 00:23:50.482 Admin Commands 00:23:50.482 -------------- 
00:23:50.482 Get Log Page (02h): Supported 00:23:50.482 Identify (06h): Supported 00:23:50.482 Abort (08h): Supported 00:23:50.482 Set Features (09h): Supported 00:23:50.482 Get Features (0Ah): Supported 00:23:50.482 Asynchronous Event Request (0Ch): Supported 00:23:50.482 Keep Alive (18h): Supported 00:23:50.482 I/O Commands 00:23:50.482 ------------ 00:23:50.482 Flush (00h): Supported 00:23:50.482 Write (01h): Supported LBA-Change 00:23:50.482 Read (02h): Supported 00:23:50.482 Write Zeroes (08h): Supported LBA-Change 00:23:50.482 Dataset Management (09h): Supported 00:23:50.482 00:23:50.482 Error Log 00:23:50.482 ========= 00:23:50.482 Entry: 0 00:23:50.482 Error Count: 0x3 00:23:50.482 Submission Queue Id: 0x0 00:23:50.482 Command Id: 0x5 00:23:50.482 Phase Bit: 0 00:23:50.482 Status Code: 0x2 00:23:50.482 Status Code Type: 0x0 00:23:50.482 Do Not Retry: 1 00:23:50.482 Error Location: 0x28 00:23:50.482 LBA: 0x0 00:23:50.482 Namespace: 0x0 00:23:50.482 Vendor Log Page: 0x0 00:23:50.482 ----------- 00:23:50.482 Entry: 1 00:23:50.482 Error Count: 0x2 00:23:50.482 Submission Queue Id: 0x0 00:23:50.482 Command Id: 0x5 00:23:50.482 Phase Bit: 0 00:23:50.482 Status Code: 0x2 00:23:50.482 Status Code Type: 0x0 00:23:50.482 Do Not Retry: 1 00:23:50.482 Error Location: 0x28 00:23:50.482 LBA: 0x0 00:23:50.482 Namespace: 0x0 00:23:50.482 Vendor Log Page: 0x0 00:23:50.482 ----------- 00:23:50.482 Entry: 2 00:23:50.482 Error Count: 0x1 00:23:50.482 Submission Queue Id: 0x0 00:23:50.482 Command Id: 0x4 00:23:50.482 Phase Bit: 0 00:23:50.482 Status Code: 0x2 00:23:50.482 Status Code Type: 0x0 00:23:50.482 Do Not Retry: 1 00:23:50.482 Error Location: 0x28 00:23:50.482 LBA: 0x0 00:23:50.482 Namespace: 0x0 00:23:50.482 Vendor Log Page: 0x0 00:23:50.482 00:23:50.482 Number of Queues 00:23:50.482 ================ 00:23:50.482 Number of I/O Submission Queues: 128 00:23:50.482 Number of I/O Completion Queues: 128 00:23:50.482 00:23:50.482 ZNS Specific Controller Data 00:23:50.482 ============================ 00:23:50.482 Zone Append Size Limit: 0 00:23:50.482 00:23:50.482 00:23:50.482 Active Namespaces 00:23:50.482 ================= 00:23:50.482 get_feature(0x05) failed 00:23:50.482 Namespace ID:1 00:23:50.482 Command Set Identifier: NVM (00h) 00:23:50.482 Deallocate: Supported 00:23:50.482 Deallocated/Unwritten Error: Not Supported 00:23:50.482 Deallocated Read Value: Unknown 00:23:50.482 Deallocate in Write Zeroes: Not Supported 00:23:50.482 Deallocated Guard Field: 0xFFFF 00:23:50.482 Flush: Supported 00:23:50.482 Reservation: Not Supported 00:23:50.482 Namespace Sharing Capabilities: Multiple Controllers 00:23:50.482 Size (in LBAs): 3750748848 (1788GiB) 00:23:50.482 Capacity (in LBAs): 3750748848 (1788GiB) 00:23:50.482 Utilization (in LBAs): 3750748848 (1788GiB) 00:23:50.482 UUID: d1f1e4a1-ac9a-4585-8d53-9c7a55321c54 00:23:50.482 Thin Provisioning: Not Supported 00:23:50.482 Per-NS Atomic Units: Yes 00:23:50.482 Atomic Write Unit (Normal): 8 00:23:50.482 Atomic Write Unit (PFail): 8 00:23:50.482 Preferred Write Granularity: 8 00:23:50.482 Atomic Compare & Write Unit: 8 00:23:50.482 Atomic Boundary Size (Normal): 0 00:23:50.482 Atomic Boundary Size (PFail): 0 00:23:50.482 Atomic Boundary Offset: 0 00:23:50.482 NGUID/EUI64 Never Reused: No 00:23:50.482 ANA group ID: 1 00:23:50.482 Namespace Write Protected: No 00:23:50.482 Number of LBA Formats: 1 00:23:50.482 Current LBA Format: LBA Format #00 00:23:50.482 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:50.482 00:23:50.482 12:19:51 -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:50.482 12:19:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:50.482 12:19:51 -- nvmf/common.sh@117 -- # sync 00:23:50.482 12:19:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:50.482 12:19:51 -- nvmf/common.sh@120 -- # set +e 00:23:50.482 12:19:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:50.482 12:19:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:50.482 rmmod nvme_tcp 00:23:50.482 rmmod nvme_fabrics 00:23:50.482 12:19:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:50.482 12:19:51 -- nvmf/common.sh@124 -- # set -e 00:23:50.482 12:19:51 -- nvmf/common.sh@125 -- # return 0 00:23:50.482 12:19:51 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:23:50.482 12:19:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:50.482 12:19:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:50.482 12:19:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:50.482 12:19:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:50.482 12:19:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:50.482 12:19:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.482 12:19:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.482 12:19:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.027 12:19:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:53.027 12:19:53 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:53.027 12:19:53 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:53.027 12:19:53 -- nvmf/common.sh@675 -- # echo 0 00:23:53.027 12:19:53 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:53.027 12:19:53 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:53.027 12:19:53 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:53.027 12:19:53 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:53.027 12:19:53 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:53.027 12:19:53 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:53.027 12:19:53 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:56.327 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:56.327 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:57.717 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:23:58.293 00:23:58.293 real 0m20.365s 00:23:58.293 user 0m4.976s 00:23:58.293 sys 0m10.609s 00:23:58.293 
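The clean_kernel_target trace above tears down the configfs-backed kernel NVMe-oF target in a fixed order before unloading the nvmet modules. A minimal standalone sketch of that teardown, assuming the same subsystem NQN and port number used in this run, and assuming the bare "echo 0" in the trace is directed at the namespace enable attribute:

# Tear down the kernel NVMe-oF target created under /sys/kernel/config/nvmet (sketch).
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
echo 0 > "$subsys/namespaces/1/enable"                      # disable the namespace first (assumed target of the traced echo 0)
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"        # unlink the port from the subsystem
rmdir "$subsys/namespaces/1"                                # then remove namespace, port and subsystem directories
rmdir "$port"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                                 # finally unload the transport and core modules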
12:19:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:58.293 12:19:59 -- common/autotest_common.sh@10 -- # set +x 00:23:58.293 ************************************ 00:23:58.293 END TEST nvmf_identify_kernel_target 00:23:58.293 ************************************ 00:23:58.293 12:19:59 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:58.293 12:19:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:58.293 12:19:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:58.293 12:19:59 -- common/autotest_common.sh@10 -- # set +x 00:23:58.293 ************************************ 00:23:58.293 START TEST nvmf_auth 00:23:58.293 ************************************ 00:23:58.293 12:19:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:58.293 * Looking for test storage... 00:23:58.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.293 12:19:59 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.293 12:19:59 -- nvmf/common.sh@7 -- # uname -s 00:23:58.293 12:19:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.293 12:19:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.293 12:19:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.293 12:19:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.293 12:19:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.293 12:19:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.293 12:19:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.293 12:19:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.293 12:19:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.293 12:19:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.293 12:19:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:58.293 12:19:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:58.293 12:19:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.293 12:19:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.293 12:19:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.293 12:19:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.293 12:19:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.293 12:19:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.293 12:19:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.293 12:19:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.293 12:19:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.293 12:19:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.293 12:19:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.293 12:19:59 -- paths/export.sh@5 -- # export PATH 00:23:58.293 12:19:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.293 12:19:59 -- nvmf/common.sh@47 -- # : 0 00:23:58.293 12:19:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:58.293 12:19:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:58.293 12:19:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.293 12:19:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.293 12:19:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.293 12:19:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:58.293 12:19:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:58.293 12:19:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:58.553 12:19:59 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:58.553 12:19:59 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:58.553 12:19:59 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:58.553 12:19:59 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:58.553 12:19:59 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:58.553 12:19:59 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:58.553 12:19:59 -- host/auth.sh@21 -- # keys=() 00:23:58.553 12:19:59 -- host/auth.sh@77 -- # nvmftestinit 00:23:58.553 12:19:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:58.553 12:19:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.553 12:19:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:58.553 12:19:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:58.553 12:19:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:58.553 12:19:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.553 12:19:59 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.553 12:19:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.553 12:19:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:58.553 12:19:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:58.553 12:19:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:58.553 12:19:59 -- common/autotest_common.sh@10 -- # set +x 00:24:05.136 12:20:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:05.136 12:20:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:05.136 12:20:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:05.136 12:20:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:05.136 12:20:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:05.136 12:20:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:05.136 12:20:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:05.136 12:20:06 -- nvmf/common.sh@295 -- # net_devs=() 00:24:05.136 12:20:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:05.136 12:20:06 -- nvmf/common.sh@296 -- # e810=() 00:24:05.136 12:20:06 -- nvmf/common.sh@296 -- # local -ga e810 00:24:05.136 12:20:06 -- nvmf/common.sh@297 -- # x722=() 00:24:05.136 12:20:06 -- nvmf/common.sh@297 -- # local -ga x722 00:24:05.136 12:20:06 -- nvmf/common.sh@298 -- # mlx=() 00:24:05.136 12:20:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:05.136 12:20:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.136 12:20:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.136 12:20:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.136 12:20:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.136 12:20:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.136 12:20:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.136 12:20:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.136 12:20:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.136 12:20:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.136 12:20:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.136 12:20:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.136 12:20:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:05.136 12:20:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:05.136 12:20:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:05.136 12:20:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.136 12:20:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:05.136 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:05.136 12:20:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.136 12:20:06 -- nvmf/common.sh@341 -- # echo 'Found 
0000:31:00.1 (0x8086 - 0x159b)' 00:24:05.136 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:05.136 12:20:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:05.136 12:20:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.136 12:20:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.136 12:20:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:05.136 12:20:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.136 12:20:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:05.136 Found net devices under 0000:31:00.0: cvl_0_0 00:24:05.136 12:20:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.136 12:20:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.136 12:20:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.136 12:20:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:05.136 12:20:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.136 12:20:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:05.136 Found net devices under 0000:31:00.1: cvl_0_1 00:24:05.136 12:20:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.136 12:20:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:05.136 12:20:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:05.136 12:20:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:05.136 12:20:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:05.136 12:20:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.136 12:20:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.136 12:20:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.136 12:20:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:05.136 12:20:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.136 12:20:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.136 12:20:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:05.136 12:20:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.136 12:20:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.136 12:20:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:05.136 12:20:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:05.136 12:20:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.136 12:20:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.396 12:20:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.396 12:20:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.396 12:20:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:05.396 12:20:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.657 12:20:06 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.657 12:20:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.657 12:20:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:05.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:24:05.657 00:24:05.657 --- 10.0.0.2 ping statistics --- 00:24:05.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.657 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:24:05.657 12:20:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:24:05.657 00:24:05.657 --- 10.0.0.1 ping statistics --- 00:24:05.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.657 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:24:05.657 12:20:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.657 12:20:06 -- nvmf/common.sh@411 -- # return 0 00:24:05.658 12:20:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:05.658 12:20:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.658 12:20:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:05.658 12:20:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:05.658 12:20:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.658 12:20:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:05.658 12:20:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:05.658 12:20:06 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:24:05.658 12:20:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:05.658 12:20:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:05.658 12:20:06 -- common/autotest_common.sh@10 -- # set +x 00:24:05.658 12:20:06 -- nvmf/common.sh@470 -- # nvmfpid=3531831 00:24:05.658 12:20:06 -- nvmf/common.sh@471 -- # waitforlisten 3531831 00:24:05.658 12:20:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:05.658 12:20:06 -- common/autotest_common.sh@817 -- # '[' -z 3531831 ']' 00:24:05.658 12:20:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.658 12:20:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:05.658 12:20:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
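The nvmfappstart step above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with the nvme_auth log flag and then waits for its RPC socket to come up. A rough equivalent of that launch-and-wait, assuming the default /var/tmp/spdk.sock RPC socket, the workspace paths shown in the trace, and that rpc_cmd maps to scripts/rpc.py:

# Start the target inside the test namespace and poll the RPC socket until it answers (sketch).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                                               # waitforlisten-style polling loop
done
echo "nvmf_tgt (pid $nvmfpid) is up and listening on /var/tmp/spdk.sock"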
00:24:05.658 12:20:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:05.658 12:20:06 -- common/autotest_common.sh@10 -- # set +x 00:24:06.597 12:20:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:06.597 12:20:07 -- common/autotest_common.sh@850 -- # return 0 00:24:06.597 12:20:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:06.597 12:20:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:06.597 12:20:07 -- common/autotest_common.sh@10 -- # set +x 00:24:06.597 12:20:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.597 12:20:07 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:06.597 12:20:07 -- host/auth.sh@81 -- # gen_key null 32 00:24:06.597 12:20:07 -- host/auth.sh@53 -- # local digest len file key 00:24:06.597 12:20:07 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:06.597 12:20:07 -- host/auth.sh@54 -- # local -A digests 00:24:06.597 12:20:07 -- host/auth.sh@56 -- # digest=null 00:24:06.597 12:20:07 -- host/auth.sh@56 -- # len=32 00:24:06.597 12:20:07 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:06.597 12:20:07 -- host/auth.sh@57 -- # key=7271533628f360b359f3cdccaf7eedef 00:24:06.597 12:20:07 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:06.597 12:20:07 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.jjQ 00:24:06.597 12:20:07 -- host/auth.sh@59 -- # format_dhchap_key 7271533628f360b359f3cdccaf7eedef 0 00:24:06.597 12:20:07 -- nvmf/common.sh@708 -- # format_key DHHC-1 7271533628f360b359f3cdccaf7eedef 0 00:24:06.597 12:20:07 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:06.597 12:20:07 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:06.597 12:20:07 -- nvmf/common.sh@693 -- # key=7271533628f360b359f3cdccaf7eedef 00:24:06.597 12:20:07 -- nvmf/common.sh@693 -- # digest=0 00:24:06.597 12:20:07 -- nvmf/common.sh@694 -- # python - 00:24:06.597 12:20:07 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.jjQ 00:24:06.597 12:20:07 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.jjQ 00:24:06.597 12:20:07 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.jjQ 00:24:06.597 12:20:07 -- host/auth.sh@82 -- # gen_key null 48 00:24:06.597 12:20:07 -- host/auth.sh@53 -- # local digest len file key 00:24:06.597 12:20:07 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:06.597 12:20:07 -- host/auth.sh@54 -- # local -A digests 00:24:06.597 12:20:07 -- host/auth.sh@56 -- # digest=null 00:24:06.597 12:20:07 -- host/auth.sh@56 -- # len=48 00:24:06.597 12:20:07 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:06.597 12:20:07 -- host/auth.sh@57 -- # key=b6578fe61cc83c395c15d614fa2c07cd30a4727168556b37 00:24:06.597 12:20:07 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:06.597 12:20:07 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.8XF 00:24:06.597 12:20:07 -- host/auth.sh@59 -- # format_dhchap_key b6578fe61cc83c395c15d614fa2c07cd30a4727168556b37 0 00:24:06.597 12:20:07 -- nvmf/common.sh@708 -- # format_key DHHC-1 b6578fe61cc83c395c15d614fa2c07cd30a4727168556b37 0 00:24:06.597 12:20:07 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:06.597 12:20:07 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:06.597 12:20:07 -- nvmf/common.sh@693 -- # key=b6578fe61cc83c395c15d614fa2c07cd30a4727168556b37 00:24:06.597 12:20:07 -- nvmf/common.sh@693 -- # 
digest=0 00:24:06.597 12:20:07 -- nvmf/common.sh@694 -- # python - 00:24:06.597 12:20:07 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.8XF 00:24:06.597 12:20:07 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.8XF 00:24:06.597 12:20:07 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.8XF 00:24:06.597 12:20:07 -- host/auth.sh@83 -- # gen_key sha256 32 00:24:06.597 12:20:07 -- host/auth.sh@53 -- # local digest len file key 00:24:06.597 12:20:07 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:06.597 12:20:07 -- host/auth.sh@54 -- # local -A digests 00:24:06.597 12:20:07 -- host/auth.sh@56 -- # digest=sha256 00:24:06.597 12:20:07 -- host/auth.sh@56 -- # len=32 00:24:06.597 12:20:07 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:06.597 12:20:07 -- host/auth.sh@57 -- # key=3d1a4ab32555634896d4716dda9b5500 00:24:06.597 12:20:07 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:24:06.597 12:20:07 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.A4R 00:24:06.597 12:20:07 -- host/auth.sh@59 -- # format_dhchap_key 3d1a4ab32555634896d4716dda9b5500 1 00:24:06.597 12:20:07 -- nvmf/common.sh@708 -- # format_key DHHC-1 3d1a4ab32555634896d4716dda9b5500 1 00:24:06.597 12:20:07 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:06.597 12:20:07 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:06.597 12:20:07 -- nvmf/common.sh@693 -- # key=3d1a4ab32555634896d4716dda9b5500 00:24:06.597 12:20:07 -- nvmf/common.sh@693 -- # digest=1 00:24:06.597 12:20:07 -- nvmf/common.sh@694 -- # python - 00:24:06.597 12:20:07 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.A4R 00:24:06.597 12:20:07 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.A4R 00:24:06.597 12:20:07 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.A4R 00:24:06.597 12:20:07 -- host/auth.sh@84 -- # gen_key sha384 48 00:24:06.597 12:20:07 -- host/auth.sh@53 -- # local digest len file key 00:24:06.597 12:20:07 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:06.597 12:20:07 -- host/auth.sh@54 -- # local -A digests 00:24:06.597 12:20:07 -- host/auth.sh@56 -- # digest=sha384 00:24:06.597 12:20:07 -- host/auth.sh@56 -- # len=48 00:24:06.597 12:20:07 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:06.597 12:20:07 -- host/auth.sh@57 -- # key=bc9b276c489be84eb6d147536fcd653916c37f9d5f2d0046 00:24:06.597 12:20:07 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:24:06.597 12:20:07 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.9kr 00:24:06.597 12:20:07 -- host/auth.sh@59 -- # format_dhchap_key bc9b276c489be84eb6d147536fcd653916c37f9d5f2d0046 2 00:24:06.597 12:20:07 -- nvmf/common.sh@708 -- # format_key DHHC-1 bc9b276c489be84eb6d147536fcd653916c37f9d5f2d0046 2 00:24:06.597 12:20:07 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:06.597 12:20:07 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:06.597 12:20:07 -- nvmf/common.sh@693 -- # key=bc9b276c489be84eb6d147536fcd653916c37f9d5f2d0046 00:24:06.597 12:20:07 -- nvmf/common.sh@693 -- # digest=2 00:24:06.597 12:20:07 -- nvmf/common.sh@694 -- # python - 00:24:06.597 12:20:07 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.9kr 00:24:06.597 12:20:07 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.9kr 00:24:06.857 12:20:07 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.9kr 00:24:06.857 12:20:07 -- host/auth.sh@85 -- # gen_key sha512 64 00:24:06.857 12:20:07 -- host/auth.sh@53 -- # local digest len file key 00:24:06.857 12:20:07 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:06.857 12:20:07 -- host/auth.sh@54 -- # local -A digests 00:24:06.857 12:20:07 -- host/auth.sh@56 -- # digest=sha512 00:24:06.857 12:20:07 -- host/auth.sh@56 -- # len=64 00:24:06.857 12:20:07 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:06.857 12:20:07 -- host/auth.sh@57 -- # key=31cdbda2132047b43d0b0f89cbddf47fb13c328873412e5f8164e6511ac77d92 00:24:06.857 12:20:07 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:24:06.857 12:20:07 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.brD 00:24:06.857 12:20:07 -- host/auth.sh@59 -- # format_dhchap_key 31cdbda2132047b43d0b0f89cbddf47fb13c328873412e5f8164e6511ac77d92 3 00:24:06.857 12:20:07 -- nvmf/common.sh@708 -- # format_key DHHC-1 31cdbda2132047b43d0b0f89cbddf47fb13c328873412e5f8164e6511ac77d92 3 00:24:06.857 12:20:07 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:06.857 12:20:07 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:06.857 12:20:07 -- nvmf/common.sh@693 -- # key=31cdbda2132047b43d0b0f89cbddf47fb13c328873412e5f8164e6511ac77d92 00:24:06.857 12:20:07 -- nvmf/common.sh@693 -- # digest=3 00:24:06.857 12:20:07 -- nvmf/common.sh@694 -- # python - 00:24:06.857 12:20:07 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.brD 00:24:06.857 12:20:07 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.brD 00:24:06.857 12:20:07 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.brD 00:24:06.857 12:20:07 -- host/auth.sh@87 -- # waitforlisten 3531831 00:24:06.857 12:20:07 -- common/autotest_common.sh@817 -- # '[' -z 3531831 ']' 00:24:06.857 12:20:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.857 12:20:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:06.857 12:20:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
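Each gen_key call above pulls random hex from /dev/urandom with xxd and hands it to format_dhchap_key, which emits the DHHC-1 secret strings stored under /tmp/spdk.key-*. A small sketch of what that wrapping produces, assuming (as the base64 payloads in the trace suggest) that the ASCII hex string itself is the secret and a little-endian CRC-32 of it is appended before base64 encoding; the tag after DHHC-1 follows the digest argument seen in the trace (00 for a null digest, 01 for sha256, and so on):

# Generate a 48-character hex secret and print it in DHHC-1 form with the null-digest tag (sketch).
key=$(xxd -p -c0 -l 24 /dev/urandom)
python3 - "$key" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()                     # the hex string itself is used as the secret bytes
crc = zlib.crc32(secret).to_bytes(4, "little")    # CRC-32 appended (assumed little-endian)
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
PY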
00:24:06.857 12:20:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:06.857 12:20:07 -- common/autotest_common.sh@10 -- # set +x 00:24:06.857 12:20:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:06.857 12:20:08 -- common/autotest_common.sh@850 -- # return 0 00:24:06.857 12:20:08 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:06.857 12:20:08 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jjQ 00:24:06.857 12:20:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.857 12:20:08 -- common/autotest_common.sh@10 -- # set +x 00:24:06.857 12:20:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.857 12:20:08 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:06.857 12:20:08 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.8XF 00:24:06.857 12:20:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.857 12:20:08 -- common/autotest_common.sh@10 -- # set +x 00:24:06.857 12:20:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.857 12:20:08 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:06.857 12:20:08 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.A4R 00:24:06.857 12:20:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.118 12:20:08 -- common/autotest_common.sh@10 -- # set +x 00:24:07.118 12:20:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.118 12:20:08 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:07.118 12:20:08 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.9kr 00:24:07.118 12:20:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.118 12:20:08 -- common/autotest_common.sh@10 -- # set +x 00:24:07.118 12:20:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.118 12:20:08 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:07.118 12:20:08 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.brD 00:24:07.118 12:20:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.118 12:20:08 -- common/autotest_common.sh@10 -- # set +x 00:24:07.118 12:20:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.118 12:20:08 -- host/auth.sh@92 -- # nvmet_auth_init 00:24:07.118 12:20:08 -- host/auth.sh@35 -- # get_main_ns_ip 00:24:07.118 12:20:08 -- nvmf/common.sh@717 -- # local ip 00:24:07.118 12:20:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.118 12:20:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.118 12:20:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.118 12:20:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.118 12:20:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.118 12:20:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.118 12:20:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.118 12:20:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.118 12:20:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.118 12:20:08 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:07.118 12:20:08 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:07.118 12:20:08 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:07.118 12:20:08 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:07.118 12:20:08 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:07.118 12:20:08 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:07.118 12:20:08 -- nvmf/common.sh@628 -- # local block nvme 00:24:07.118 12:20:08 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:24:07.118 12:20:08 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:07.118 12:20:08 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:07.118 12:20:08 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:10.417 Waiting for block devices as requested 00:24:10.417 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:10.417 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:10.417 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:10.678 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:10.679 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:10.679 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:10.938 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:10.938 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:10.938 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:24:11.199 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:11.199 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:11.199 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:11.458 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:11.458 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:11.458 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:11.718 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:11.718 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:12.661 12:20:13 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:12.661 12:20:13 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:12.661 12:20:13 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:12.661 12:20:13 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:12.661 12:20:13 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:12.661 12:20:13 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:12.661 12:20:13 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:12.661 12:20:13 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:12.661 12:20:13 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:12.661 No valid GPT data, bailing 00:24:12.661 12:20:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:12.661 12:20:13 -- scripts/common.sh@391 -- # pt= 00:24:12.661 12:20:13 -- scripts/common.sh@392 -- # return 1 00:24:12.661 12:20:13 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:12.661 12:20:13 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:24:12.661 12:20:13 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:12.661 12:20:13 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:12.661 12:20:13 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:12.661 12:20:13 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:12.662 12:20:13 -- nvmf/common.sh@656 -- # echo 1 00:24:12.662 12:20:13 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:24:12.662 12:20:13 -- nvmf/common.sh@658 -- # echo 1 00:24:12.662 12:20:13 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:12.662 12:20:13 -- nvmf/common.sh@661 -- # echo tcp 00:24:12.662 12:20:13 -- 
nvmf/common.sh@662 -- # echo 4420 00:24:12.662 12:20:13 -- nvmf/common.sh@663 -- # echo ipv4 00:24:12.662 12:20:13 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:12.662 12:20:13 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:24:12.662 00:24:12.662 Discovery Log Number of Records 2, Generation counter 2 00:24:12.662 =====Discovery Log Entry 0====== 00:24:12.662 trtype: tcp 00:24:12.662 adrfam: ipv4 00:24:12.662 subtype: current discovery subsystem 00:24:12.662 treq: not specified, sq flow control disable supported 00:24:12.662 portid: 1 00:24:12.662 trsvcid: 4420 00:24:12.662 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:12.662 traddr: 10.0.0.1 00:24:12.662 eflags: none 00:24:12.662 sectype: none 00:24:12.662 =====Discovery Log Entry 1====== 00:24:12.662 trtype: tcp 00:24:12.662 adrfam: ipv4 00:24:12.662 subtype: nvme subsystem 00:24:12.662 treq: not specified, sq flow control disable supported 00:24:12.662 portid: 1 00:24:12.662 trsvcid: 4420 00:24:12.662 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:12.662 traddr: 10.0.0.1 00:24:12.662 eflags: none 00:24:12.662 sectype: none 00:24:12.662 12:20:13 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:12.662 12:20:13 -- host/auth.sh@37 -- # echo 0 00:24:12.662 12:20:13 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:12.662 12:20:13 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:12.662 12:20:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:12.662 12:20:13 -- host/auth.sh@44 -- # digest=sha256 00:24:12.662 12:20:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.662 12:20:13 -- host/auth.sh@44 -- # keyid=1 00:24:12.662 12:20:13 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:12.662 12:20:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:12.662 12:20:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:12.662 12:20:13 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:12.662 12:20:13 -- host/auth.sh@100 -- # IFS=, 00:24:12.662 12:20:13 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:24:12.662 12:20:13 -- host/auth.sh@100 -- # IFS=, 00:24:12.662 12:20:13 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:12.662 12:20:13 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:12.662 12:20:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:12.662 12:20:13 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:24:12.662 12:20:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:12.662 12:20:13 -- host/auth.sh@68 -- # keyid=1 00:24:12.662 12:20:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:12.662 12:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.662 12:20:13 -- common/autotest_common.sh@10 -- # set +x 00:24:12.662 12:20:13 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.662 12:20:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:12.662 12:20:13 -- nvmf/common.sh@717 -- # local ip 00:24:12.662 12:20:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:12.662 12:20:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:12.662 12:20:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.662 12:20:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.662 12:20:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:12.662 12:20:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.662 12:20:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:12.662 12:20:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:12.662 12:20:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:12.662 12:20:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:12.662 12:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.662 12:20:13 -- common/autotest_common.sh@10 -- # set +x 00:24:12.922 nvme0n1 00:24:12.922 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.922 12:20:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.922 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.922 12:20:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:12.922 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:12.922 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.922 12:20:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.922 12:20:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.922 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.922 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:12.922 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.922 12:20:14 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:12.922 12:20:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:12.922 12:20:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:12.922 12:20:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:12.922 12:20:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:12.922 12:20:14 -- host/auth.sh@44 -- # digest=sha256 00:24:12.922 12:20:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.922 12:20:14 -- host/auth.sh@44 -- # keyid=0 00:24:12.922 12:20:14 -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:12.922 12:20:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:12.922 12:20:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:12.922 12:20:14 -- host/auth.sh@49 -- # echo DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:12.922 12:20:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:24:12.922 12:20:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:12.922 12:20:14 -- host/auth.sh@68 -- # digest=sha256 00:24:12.922 12:20:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:12.922 12:20:14 -- host/auth.sh@68 -- # keyid=0 00:24:12.922 12:20:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:12.922 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.922 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:12.922 12:20:14 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.922 12:20:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:12.922 12:20:14 -- nvmf/common.sh@717 -- # local ip 00:24:12.922 12:20:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:12.922 12:20:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:12.922 12:20:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.922 12:20:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.922 12:20:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:12.922 12:20:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.922 12:20:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:12.922 12:20:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:12.922 12:20:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:12.922 12:20:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:12.922 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.922 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.183 nvme0n1 00:24:13.183 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.183 12:20:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.183 12:20:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:13.183 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.183 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.183 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.183 12:20:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.183 12:20:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.183 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.183 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.183 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.183 12:20:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:13.183 12:20:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:13.183 12:20:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:13.183 12:20:14 -- host/auth.sh@44 -- # digest=sha256 00:24:13.183 12:20:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.183 12:20:14 -- host/auth.sh@44 -- # keyid=1 00:24:13.183 12:20:14 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:13.183 12:20:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:13.183 12:20:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:13.183 12:20:14 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:13.183 12:20:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:24:13.183 12:20:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:13.183 12:20:14 -- host/auth.sh@68 -- # digest=sha256 00:24:13.183 12:20:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:13.183 12:20:14 -- host/auth.sh@68 -- # keyid=1 00:24:13.183 12:20:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:13.183 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.183 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.183 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.183 12:20:14 -- host/auth.sh@70 -- # get_main_ns_ip 
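Each connect_authenticate pass traced here repeats the same host-side sequence: restrict the allowed DH-HMAC-CHAP digests and DH groups, attach the controller with the keyring entry matching the secret that nvmet_auth_set_key just wrote into the kernel target's host directory, confirm the controller appears, then detach. Expressed as direct rpc.py calls, assuming the secret was registered earlier as key1 via keyring_file_add_key:

# One host-side DH-HMAC-CHAP authentication round, mirroring the traced rpc_cmd sequence (sketch).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
$rpc bdev_nvme_get_controllers                  # expect nvme0 in the output on success
$rpc bdev_nvme_detach_controller nvme0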
00:24:13.183 12:20:14 -- nvmf/common.sh@717 -- # local ip 00:24:13.183 12:20:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:13.183 12:20:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:13.183 12:20:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.183 12:20:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.183 12:20:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:13.183 12:20:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.183 12:20:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:13.183 12:20:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:13.183 12:20:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:13.183 12:20:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:13.183 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.183 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.443 nvme0n1 00:24:13.443 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.443 12:20:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.443 12:20:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:13.443 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.443 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.443 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.444 12:20:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.444 12:20:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.444 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.444 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.444 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.444 12:20:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:13.444 12:20:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:13.444 12:20:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:13.444 12:20:14 -- host/auth.sh@44 -- # digest=sha256 00:24:13.444 12:20:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.444 12:20:14 -- host/auth.sh@44 -- # keyid=2 00:24:13.444 12:20:14 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:13.444 12:20:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:13.444 12:20:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:13.444 12:20:14 -- host/auth.sh@49 -- # echo DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:13.444 12:20:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:24:13.444 12:20:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:13.444 12:20:14 -- host/auth.sh@68 -- # digest=sha256 00:24:13.444 12:20:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:13.444 12:20:14 -- host/auth.sh@68 -- # keyid=2 00:24:13.444 12:20:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:13.444 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.444 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.444 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.444 12:20:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:13.444 12:20:14 -- nvmf/common.sh@717 -- # local ip 00:24:13.444 12:20:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:13.444 12:20:14 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:24:13.444 12:20:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.444 12:20:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.444 12:20:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:13.444 12:20:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.444 12:20:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:13.444 12:20:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:13.444 12:20:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:13.444 12:20:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:13.444 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.444 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.704 nvme0n1 00:24:13.704 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.704 12:20:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.704 12:20:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:13.704 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.704 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.704 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.704 12:20:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.704 12:20:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.704 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.704 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.704 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.704 12:20:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:13.704 12:20:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:13.704 12:20:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:13.704 12:20:14 -- host/auth.sh@44 -- # digest=sha256 00:24:13.704 12:20:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.704 12:20:14 -- host/auth.sh@44 -- # keyid=3 00:24:13.704 12:20:14 -- host/auth.sh@45 -- # key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:13.704 12:20:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:13.704 12:20:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:13.704 12:20:14 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:13.704 12:20:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:24:13.704 12:20:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:13.704 12:20:14 -- host/auth.sh@68 -- # digest=sha256 00:24:13.704 12:20:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:13.704 12:20:14 -- host/auth.sh@68 -- # keyid=3 00:24:13.704 12:20:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:13.704 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.704 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.704 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.704 12:20:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:13.704 12:20:14 -- nvmf/common.sh@717 -- # local ip 00:24:13.704 12:20:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:13.704 12:20:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:13.704 12:20:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
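The get_main_ns_ip helper traced just above only decides which address the initiator dials: it keeps a small transport-to-variable map and, for tcp, resolves to NVMF_INITIATOR_IP (10.0.0.1 in this run). A minimal sketch of that selection logic, reconstructed from the trace — the function and array names are the ones the suite echoes, while the transport variable name is an assumption (the trace only shows its value, tcp):

    # sketch of the address lookup seen in the trace; not the suite's exact source
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # transport variable name assumed; its value here is "tcp"
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # holds a variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion -> 10.0.0.1 in this log
        echo "${!ip}"
    }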
00:24:13.704 12:20:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.704 12:20:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:13.704 12:20:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.704 12:20:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:13.704 12:20:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:13.704 12:20:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:13.704 12:20:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:13.704 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.704 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.704 nvme0n1 00:24:13.704 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.964 12:20:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.964 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.964 12:20:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:13.964 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.964 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.964 12:20:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.964 12:20:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.964 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.964 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.964 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.964 12:20:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:13.964 12:20:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:13.964 12:20:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:13.964 12:20:14 -- host/auth.sh@44 -- # digest=sha256 00:24:13.964 12:20:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.964 12:20:14 -- host/auth.sh@44 -- # keyid=4 00:24:13.964 12:20:14 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:13.964 12:20:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:13.964 12:20:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:13.964 12:20:14 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:13.964 12:20:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:24:13.964 12:20:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:13.964 12:20:14 -- host/auth.sh@68 -- # digest=sha256 00:24:13.964 12:20:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:13.964 12:20:14 -- host/auth.sh@68 -- # keyid=4 00:24:13.964 12:20:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:13.964 12:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.964 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:24:13.964 12:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.964 12:20:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:13.964 12:20:15 -- nvmf/common.sh@717 -- # local ip 00:24:13.964 12:20:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:13.964 12:20:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:13.964 12:20:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.964 12:20:15 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.964 12:20:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:13.964 12:20:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.964 12:20:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:13.964 12:20:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:13.964 12:20:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:13.964 12:20:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:13.964 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.964 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:13.964 nvme0n1 00:24:13.964 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.964 12:20:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.964 12:20:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:13.964 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.964 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:13.964 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.223 12:20:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.223 12:20:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.223 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.223 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:14.223 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.223 12:20:15 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.223 12:20:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:14.223 12:20:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:14.223 12:20:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:14.223 12:20:15 -- host/auth.sh@44 -- # digest=sha256 00:24:14.223 12:20:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.223 12:20:15 -- host/auth.sh@44 -- # keyid=0 00:24:14.223 12:20:15 -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:14.223 12:20:15 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:14.223 12:20:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:14.223 12:20:15 -- host/auth.sh@49 -- # echo DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:14.223 12:20:15 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:24:14.223 12:20:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:14.223 12:20:15 -- host/auth.sh@68 -- # digest=sha256 00:24:14.223 12:20:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:14.223 12:20:15 -- host/auth.sh@68 -- # keyid=0 00:24:14.223 12:20:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:14.223 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.223 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:14.223 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.223 12:20:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:14.223 12:20:15 -- nvmf/common.sh@717 -- # local ip 00:24:14.223 12:20:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.223 12:20:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.223 12:20:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.223 12:20:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.223 12:20:15 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:24:14.223 12:20:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.223 12:20:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:14.223 12:20:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:14.223 12:20:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:14.223 12:20:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:14.223 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.223 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:14.223 nvme0n1 00:24:14.223 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.223 12:20:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.223 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.223 12:20:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:14.223 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:14.223 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.483 12:20:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.483 12:20:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.483 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.483 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:14.483 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.483 12:20:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:14.483 12:20:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:14.483 12:20:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:14.483 12:20:15 -- host/auth.sh@44 -- # digest=sha256 00:24:14.483 12:20:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.483 12:20:15 -- host/auth.sh@44 -- # keyid=1 00:24:14.483 12:20:15 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:14.483 12:20:15 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:14.483 12:20:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:14.483 12:20:15 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:14.483 12:20:15 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:24:14.483 12:20:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:14.483 12:20:15 -- host/auth.sh@68 -- # digest=sha256 00:24:14.483 12:20:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:14.483 12:20:15 -- host/auth.sh@68 -- # keyid=1 00:24:14.483 12:20:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:14.483 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.483 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:14.483 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.483 12:20:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:14.483 12:20:15 -- nvmf/common.sh@717 -- # local ip 00:24:14.483 12:20:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.483 12:20:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.483 12:20:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.483 12:20:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.483 12:20:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:14.483 12:20:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.483 12:20:15 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:14.483 12:20:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:14.483 12:20:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:14.483 12:20:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:14.483 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.483 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:14.483 nvme0n1 00:24:14.483 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.483 12:20:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.483 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.483 12:20:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:14.483 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:14.483 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.742 12:20:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.742 12:20:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.742 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.742 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:14.742 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.742 12:20:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:14.742 12:20:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:14.742 12:20:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:14.742 12:20:15 -- host/auth.sh@44 -- # digest=sha256 00:24:14.742 12:20:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.742 12:20:15 -- host/auth.sh@44 -- # keyid=2 00:24:14.742 12:20:15 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:14.742 12:20:15 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:14.742 12:20:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:14.742 12:20:15 -- host/auth.sh@49 -- # echo DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:14.742 12:20:15 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:24:14.742 12:20:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:14.742 12:20:15 -- host/auth.sh@68 -- # digest=sha256 00:24:14.742 12:20:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:14.742 12:20:15 -- host/auth.sh@68 -- # keyid=2 00:24:14.742 12:20:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:14.742 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.742 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:14.742 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.742 12:20:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:14.742 12:20:15 -- nvmf/common.sh@717 -- # local ip 00:24:14.742 12:20:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.742 12:20:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.742 12:20:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.742 12:20:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.742 12:20:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:14.742 12:20:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.742 12:20:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:14.742 12:20:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:14.742 12:20:15 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:24:14.742 12:20:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:14.742 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.742 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:14.742 nvme0n1 00:24:14.742 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.742 12:20:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.742 12:20:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:14.742 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.742 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:14.742 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.014 12:20:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.014 12:20:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.014 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.014 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:15.014 12:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.014 12:20:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:15.014 12:20:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:15.014 12:20:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:15.014 12:20:15 -- host/auth.sh@44 -- # digest=sha256 00:24:15.014 12:20:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.014 12:20:15 -- host/auth.sh@44 -- # keyid=3 00:24:15.014 12:20:15 -- host/auth.sh@45 -- # key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:15.014 12:20:15 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:15.014 12:20:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:15.014 12:20:15 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:15.014 12:20:15 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:24:15.014 12:20:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:15.014 12:20:15 -- host/auth.sh@68 -- # digest=sha256 00:24:15.014 12:20:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:15.014 12:20:15 -- host/auth.sh@68 -- # keyid=3 00:24:15.014 12:20:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:15.014 12:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.014 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:24:15.014 12:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.014 12:20:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:15.014 12:20:16 -- nvmf/common.sh@717 -- # local ip 00:24:15.014 12:20:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:15.014 12:20:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:15.014 12:20:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.014 12:20:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.014 12:20:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:15.014 12:20:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.014 12:20:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:15.014 12:20:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:15.014 12:20:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:15.014 12:20:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:15.014 12:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.014 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:24:15.014 nvme0n1 00:24:15.014 12:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.014 12:20:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.014 12:20:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:15.014 12:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.014 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:24:15.014 12:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.014 12:20:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.014 12:20:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.014 12:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.014 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:24:15.275 12:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.275 12:20:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:15.275 12:20:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:15.275 12:20:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:15.275 12:20:16 -- host/auth.sh@44 -- # digest=sha256 00:24:15.275 12:20:16 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.275 12:20:16 -- host/auth.sh@44 -- # keyid=4 00:24:15.275 12:20:16 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:15.275 12:20:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:15.275 12:20:16 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:15.275 12:20:16 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:15.275 12:20:16 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:24:15.275 12:20:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:15.275 12:20:16 -- host/auth.sh@68 -- # digest=sha256 00:24:15.275 12:20:16 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:15.275 12:20:16 -- host/auth.sh@68 -- # keyid=4 00:24:15.275 12:20:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:15.275 12:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.275 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:24:15.275 12:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.275 12:20:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:15.275 12:20:16 -- nvmf/common.sh@717 -- # local ip 00:24:15.275 12:20:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:15.275 12:20:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:15.275 12:20:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.275 12:20:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.275 12:20:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:15.275 12:20:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.275 12:20:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:15.275 12:20:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:15.275 12:20:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:15.275 12:20:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:24:15.275 12:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.275 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:24:15.275 nvme0n1 00:24:15.275 12:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.275 12:20:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.275 12:20:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:15.275 12:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.275 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:24:15.275 12:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.536 12:20:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.536 12:20:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.536 12:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.536 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:24:15.536 12:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.536 12:20:16 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.536 12:20:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:15.536 12:20:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:15.536 12:20:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:15.536 12:20:16 -- host/auth.sh@44 -- # digest=sha256 00:24:15.536 12:20:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.536 12:20:16 -- host/auth.sh@44 -- # keyid=0 00:24:15.536 12:20:16 -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:15.536 12:20:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:15.536 12:20:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:15.536 12:20:16 -- host/auth.sh@49 -- # echo DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:15.536 12:20:16 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:24:15.536 12:20:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:15.536 12:20:16 -- host/auth.sh@68 -- # digest=sha256 00:24:15.536 12:20:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:15.536 12:20:16 -- host/auth.sh@68 -- # keyid=0 00:24:15.536 12:20:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:15.536 12:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.536 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:24:15.536 12:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.536 12:20:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:15.536 12:20:16 -- nvmf/common.sh@717 -- # local ip 00:24:15.536 12:20:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:15.536 12:20:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:15.536 12:20:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.536 12:20:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.536 12:20:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:15.536 12:20:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.536 12:20:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:15.536 12:20:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:15.536 12:20:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:15.536 12:20:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:15.536 12:20:16 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:24:15.536 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:24:15.906 nvme0n1 00:24:15.906 12:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.906 12:20:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.906 12:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.906 12:20:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:15.906 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:24:15.906 12:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.906 12:20:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.906 12:20:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.906 12:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.906 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:24:15.906 12:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.906 12:20:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:15.906 12:20:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:15.906 12:20:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:15.906 12:20:16 -- host/auth.sh@44 -- # digest=sha256 00:24:15.906 12:20:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.906 12:20:16 -- host/auth.sh@44 -- # keyid=1 00:24:15.906 12:20:16 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:15.906 12:20:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:15.906 12:20:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:15.906 12:20:16 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:15.906 12:20:16 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:24:15.906 12:20:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:15.906 12:20:16 -- host/auth.sh@68 -- # digest=sha256 00:24:15.906 12:20:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:15.906 12:20:16 -- host/auth.sh@68 -- # keyid=1 00:24:15.906 12:20:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:15.906 12:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.906 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:24:15.906 12:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.906 12:20:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:15.906 12:20:16 -- nvmf/common.sh@717 -- # local ip 00:24:15.906 12:20:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:15.906 12:20:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:15.906 12:20:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.906 12:20:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.906 12:20:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:15.906 12:20:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.906 12:20:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:15.906 12:20:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:15.906 12:20:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:15.906 12:20:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:15.906 12:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.906 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:24:16.166 nvme0n1 00:24:16.166 
12:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.166 12:20:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.166 12:20:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:16.166 12:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.166 12:20:17 -- common/autotest_common.sh@10 -- # set +x 00:24:16.166 12:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.166 12:20:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.166 12:20:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.167 12:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.167 12:20:17 -- common/autotest_common.sh@10 -- # set +x 00:24:16.167 12:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.167 12:20:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:16.167 12:20:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:16.167 12:20:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:16.167 12:20:17 -- host/auth.sh@44 -- # digest=sha256 00:24:16.167 12:20:17 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.167 12:20:17 -- host/auth.sh@44 -- # keyid=2 00:24:16.167 12:20:17 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:16.167 12:20:17 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:16.167 12:20:17 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:16.167 12:20:17 -- host/auth.sh@49 -- # echo DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:16.167 12:20:17 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:24:16.167 12:20:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:16.167 12:20:17 -- host/auth.sh@68 -- # digest=sha256 00:24:16.167 12:20:17 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:16.167 12:20:17 -- host/auth.sh@68 -- # keyid=2 00:24:16.167 12:20:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:16.167 12:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.167 12:20:17 -- common/autotest_common.sh@10 -- # set +x 00:24:16.167 12:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.167 12:20:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.167 12:20:17 -- nvmf/common.sh@717 -- # local ip 00:24:16.167 12:20:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.167 12:20:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.167 12:20:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.167 12:20:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.167 12:20:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.167 12:20:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.167 12:20:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.167 12:20:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.167 12:20:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.167 12:20:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:16.167 12:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.167 12:20:17 -- common/autotest_common.sh@10 -- # set +x 00:24:16.427 nvme0n1 00:24:16.427 12:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.427 12:20:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.427 12:20:17 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.427 12:20:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:16.427 12:20:17 -- common/autotest_common.sh@10 -- # set +x 00:24:16.427 12:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.427 12:20:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.427 12:20:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.427 12:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.427 12:20:17 -- common/autotest_common.sh@10 -- # set +x 00:24:16.427 12:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.427 12:20:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:16.427 12:20:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:16.427 12:20:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:16.427 12:20:17 -- host/auth.sh@44 -- # digest=sha256 00:24:16.427 12:20:17 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.427 12:20:17 -- host/auth.sh@44 -- # keyid=3 00:24:16.427 12:20:17 -- host/auth.sh@45 -- # key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:16.427 12:20:17 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:16.427 12:20:17 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:16.427 12:20:17 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:16.427 12:20:17 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:24:16.427 12:20:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:16.427 12:20:17 -- host/auth.sh@68 -- # digest=sha256 00:24:16.427 12:20:17 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:16.427 12:20:17 -- host/auth.sh@68 -- # keyid=3 00:24:16.427 12:20:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:16.427 12:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.427 12:20:17 -- common/autotest_common.sh@10 -- # set +x 00:24:16.427 12:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.427 12:20:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.427 12:20:17 -- nvmf/common.sh@717 -- # local ip 00:24:16.427 12:20:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.427 12:20:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.427 12:20:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.427 12:20:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.427 12:20:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.427 12:20:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.427 12:20:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.427 12:20:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.427 12:20:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.428 12:20:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:16.428 12:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.428 12:20:17 -- common/autotest_common.sh@10 -- # set +x 00:24:16.687 nvme0n1 00:24:16.687 12:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.687 12:20:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.687 12:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.687 12:20:17 -- host/auth.sh@73 -- # jq -r '.[].name' 
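Each stanza in this part of the log is one pass of the same host-side cycle: restrict the allowed DH-CHAP digest and DH group, attach with one of the five configured keys, confirm the controller appeared, then detach before trying the next key. A condensed sketch of that cycle, using only RPCs that appear verbatim in the trace (rpc_cmd is the suite's RPC helper; addresses, NQNs and the key label are copied from the log and the digest/dhgroup pair is just one of the combinations being iterated):

    # one connect/verify/disconnect pass, repeated for every digest/dhgroup/keyid combination
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # controller must exist
    rpc_cmd bdev_nvme_detach_controller nvme0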
00:24:16.687 12:20:17 -- common/autotest_common.sh@10 -- # set +x 00:24:16.687 12:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.687 12:20:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.687 12:20:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.687 12:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.687 12:20:17 -- common/autotest_common.sh@10 -- # set +x 00:24:16.947 12:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.947 12:20:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:16.947 12:20:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:16.947 12:20:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:16.947 12:20:17 -- host/auth.sh@44 -- # digest=sha256 00:24:16.947 12:20:17 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.947 12:20:17 -- host/auth.sh@44 -- # keyid=4 00:24:16.948 12:20:17 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:16.948 12:20:17 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:16.948 12:20:17 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:16.948 12:20:17 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:16.948 12:20:17 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:24:16.948 12:20:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:16.948 12:20:17 -- host/auth.sh@68 -- # digest=sha256 00:24:16.948 12:20:17 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:16.948 12:20:17 -- host/auth.sh@68 -- # keyid=4 00:24:16.948 12:20:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:16.948 12:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.948 12:20:17 -- common/autotest_common.sh@10 -- # set +x 00:24:16.948 12:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.948 12:20:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.948 12:20:17 -- nvmf/common.sh@717 -- # local ip 00:24:16.948 12:20:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.948 12:20:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.948 12:20:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.948 12:20:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.948 12:20:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.948 12:20:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.948 12:20:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.948 12:20:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.948 12:20:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.948 12:20:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:16.948 12:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.948 12:20:17 -- common/autotest_common.sh@10 -- # set +x 00:24:17.209 nvme0n1 00:24:17.209 12:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.209 12:20:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.209 12:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.209 12:20:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:17.209 12:20:18 -- common/autotest_common.sh@10 -- # set +x 00:24:17.209 
12:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.209 12:20:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.209 12:20:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.209 12:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.209 12:20:18 -- common/autotest_common.sh@10 -- # set +x 00:24:17.209 12:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.209 12:20:18 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:17.209 12:20:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:17.209 12:20:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:17.209 12:20:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:17.209 12:20:18 -- host/auth.sh@44 -- # digest=sha256 00:24:17.209 12:20:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.209 12:20:18 -- host/auth.sh@44 -- # keyid=0 00:24:17.209 12:20:18 -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:17.209 12:20:18 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:17.209 12:20:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:17.209 12:20:18 -- host/auth.sh@49 -- # echo DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:17.209 12:20:18 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:24:17.209 12:20:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:17.209 12:20:18 -- host/auth.sh@68 -- # digest=sha256 00:24:17.209 12:20:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:17.209 12:20:18 -- host/auth.sh@68 -- # keyid=0 00:24:17.209 12:20:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:17.209 12:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.209 12:20:18 -- common/autotest_common.sh@10 -- # set +x 00:24:17.209 12:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.209 12:20:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:17.209 12:20:18 -- nvmf/common.sh@717 -- # local ip 00:24:17.209 12:20:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:17.209 12:20:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:17.209 12:20:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.209 12:20:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.209 12:20:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:17.209 12:20:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.209 12:20:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:17.209 12:20:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:17.209 12:20:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:17.209 12:20:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:17.209 12:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.209 12:20:18 -- common/autotest_common.sh@10 -- # set +x 00:24:17.779 nvme0n1 00:24:17.779 12:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.779 12:20:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.779 12:20:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:17.779 12:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.779 12:20:18 -- common/autotest_common.sh@10 -- # set +x 00:24:17.779 12:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.779 12:20:18 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.779 12:20:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.779 12:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.779 12:20:18 -- common/autotest_common.sh@10 -- # set +x 00:24:17.779 12:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.779 12:20:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:17.779 12:20:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:17.779 12:20:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:17.779 12:20:18 -- host/auth.sh@44 -- # digest=sha256 00:24:17.779 12:20:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.779 12:20:18 -- host/auth.sh@44 -- # keyid=1 00:24:17.779 12:20:18 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:17.779 12:20:18 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:17.779 12:20:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:17.779 12:20:18 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:17.779 12:20:18 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:24:17.779 12:20:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:17.779 12:20:18 -- host/auth.sh@68 -- # digest=sha256 00:24:17.779 12:20:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:17.779 12:20:18 -- host/auth.sh@68 -- # keyid=1 00:24:17.779 12:20:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:17.780 12:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.780 12:20:18 -- common/autotest_common.sh@10 -- # set +x 00:24:17.780 12:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.780 12:20:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:17.780 12:20:18 -- nvmf/common.sh@717 -- # local ip 00:24:17.780 12:20:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:17.780 12:20:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:17.780 12:20:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.780 12:20:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.780 12:20:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:17.780 12:20:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.780 12:20:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:17.780 12:20:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:17.780 12:20:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:17.780 12:20:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:17.780 12:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.780 12:20:18 -- common/autotest_common.sh@10 -- # set +x 00:24:18.351 nvme0n1 00:24:18.351 12:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.351 12:20:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.351 12:20:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.351 12:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.351 12:20:19 -- common/autotest_common.sh@10 -- # set +x 00:24:18.351 12:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.351 12:20:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.351 12:20:19 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:18.351 12:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.351 12:20:19 -- common/autotest_common.sh@10 -- # set +x 00:24:18.351 12:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.351 12:20:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.351 12:20:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:18.351 12:20:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.351 12:20:19 -- host/auth.sh@44 -- # digest=sha256 00:24:18.351 12:20:19 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.351 12:20:19 -- host/auth.sh@44 -- # keyid=2 00:24:18.351 12:20:19 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:18.351 12:20:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:18.351 12:20:19 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:18.351 12:20:19 -- host/auth.sh@49 -- # echo DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:18.351 12:20:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:24:18.351 12:20:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:18.351 12:20:19 -- host/auth.sh@68 -- # digest=sha256 00:24:18.351 12:20:19 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:18.351 12:20:19 -- host/auth.sh@68 -- # keyid=2 00:24:18.351 12:20:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:18.351 12:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.351 12:20:19 -- common/autotest_common.sh@10 -- # set +x 00:24:18.351 12:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.351 12:20:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.351 12:20:19 -- nvmf/common.sh@717 -- # local ip 00:24:18.351 12:20:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.351 12:20:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.351 12:20:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.351 12:20:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.351 12:20:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.351 12:20:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.351 12:20:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.351 12:20:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.351 12:20:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.351 12:20:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:18.351 12:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.351 12:20:19 -- common/autotest_common.sh@10 -- # set +x 00:24:18.611 nvme0n1 00:24:18.611 12:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.611 12:20:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.611 12:20:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.611 12:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.611 12:20:19 -- common/autotest_common.sh@10 -- # set +x 00:24:18.872 12:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.872 12:20:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.872 12:20:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.872 12:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.872 12:20:19 -- common/autotest_common.sh@10 -- # 
set +x 00:24:18.872 12:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.872 12:20:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.872 12:20:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:18.872 12:20:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.872 12:20:19 -- host/auth.sh@44 -- # digest=sha256 00:24:18.872 12:20:19 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.872 12:20:19 -- host/auth.sh@44 -- # keyid=3 00:24:18.872 12:20:19 -- host/auth.sh@45 -- # key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:18.872 12:20:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:18.872 12:20:19 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:18.872 12:20:19 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:18.872 12:20:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:24:18.872 12:20:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:18.872 12:20:19 -- host/auth.sh@68 -- # digest=sha256 00:24:18.872 12:20:19 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:18.872 12:20:19 -- host/auth.sh@68 -- # keyid=3 00:24:18.872 12:20:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:18.872 12:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.872 12:20:19 -- common/autotest_common.sh@10 -- # set +x 00:24:18.872 12:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.872 12:20:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.872 12:20:19 -- nvmf/common.sh@717 -- # local ip 00:24:18.872 12:20:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.872 12:20:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.872 12:20:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.872 12:20:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.872 12:20:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.872 12:20:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.872 12:20:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.872 12:20:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.872 12:20:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.872 12:20:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:18.872 12:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.872 12:20:19 -- common/autotest_common.sh@10 -- # set +x 00:24:19.445 nvme0n1 00:24:19.445 12:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.445 12:20:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.445 12:20:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.445 12:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.445 12:20:20 -- common/autotest_common.sh@10 -- # set +x 00:24:19.445 12:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.445 12:20:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.445 12:20:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.445 12:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.445 12:20:20 -- common/autotest_common.sh@10 -- # set +x 00:24:19.445 12:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.445 12:20:20 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:19.445 12:20:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:19.445 12:20:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.445 12:20:20 -- host/auth.sh@44 -- # digest=sha256 00:24:19.445 12:20:20 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:19.445 12:20:20 -- host/auth.sh@44 -- # keyid=4 00:24:19.445 12:20:20 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:19.445 12:20:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:19.445 12:20:20 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:19.445 12:20:20 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:19.445 12:20:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:24:19.445 12:20:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.445 12:20:20 -- host/auth.sh@68 -- # digest=sha256 00:24:19.445 12:20:20 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:19.445 12:20:20 -- host/auth.sh@68 -- # keyid=4 00:24:19.445 12:20:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:19.445 12:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.445 12:20:20 -- common/autotest_common.sh@10 -- # set +x 00:24:19.445 12:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.445 12:20:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.445 12:20:20 -- nvmf/common.sh@717 -- # local ip 00:24:19.445 12:20:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.445 12:20:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.445 12:20:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.445 12:20:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.445 12:20:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:19.445 12:20:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.445 12:20:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:19.445 12:20:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:19.445 12:20:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:19.445 12:20:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:19.445 12:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.445 12:20:20 -- common/autotest_common.sh@10 -- # set +x 00:24:19.706 nvme0n1 00:24:19.706 12:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.706 12:20:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.706 12:20:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.706 12:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.706 12:20:20 -- common/autotest_common.sh@10 -- # set +x 00:24:19.706 12:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.968 12:20:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.968 12:20:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.968 12:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.968 12:20:20 -- common/autotest_common.sh@10 -- # set +x 00:24:19.968 12:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.968 12:20:20 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.968 12:20:20 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:19.968 12:20:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:19.968 12:20:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.968 12:20:20 -- host/auth.sh@44 -- # digest=sha256 00:24:19.968 12:20:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.968 12:20:20 -- host/auth.sh@44 -- # keyid=0 00:24:19.968 12:20:20 -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:19.968 12:20:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:19.968 12:20:20 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:19.968 12:20:20 -- host/auth.sh@49 -- # echo DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:19.968 12:20:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:24:19.968 12:20:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.968 12:20:20 -- host/auth.sh@68 -- # digest=sha256 00:24:19.968 12:20:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:19.968 12:20:20 -- host/auth.sh@68 -- # keyid=0 00:24:19.968 12:20:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:19.968 12:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.968 12:20:20 -- common/autotest_common.sh@10 -- # set +x 00:24:19.968 12:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.968 12:20:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.968 12:20:20 -- nvmf/common.sh@717 -- # local ip 00:24:19.968 12:20:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.968 12:20:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.968 12:20:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.968 12:20:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.968 12:20:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:19.968 12:20:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.968 12:20:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:19.968 12:20:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:19.968 12:20:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:19.968 12:20:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:19.968 12:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.968 12:20:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.541 nvme0n1 00:24:20.541 12:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.541 12:20:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.541 12:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.541 12:20:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:20.541 12:20:21 -- common/autotest_common.sh@10 -- # set +x 00:24:20.541 12:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.803 12:20:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.803 12:20:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.803 12:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.803 12:20:21 -- common/autotest_common.sh@10 -- # set +x 00:24:20.803 12:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.803 12:20:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:20.803 12:20:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:20.803 12:20:21 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:20.803 12:20:21 -- host/auth.sh@44 -- # digest=sha256 00:24:20.803 12:20:21 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.803 12:20:21 -- host/auth.sh@44 -- # keyid=1 00:24:20.803 12:20:21 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:20.803 12:20:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:20.803 12:20:21 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:20.803 12:20:21 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:20.803 12:20:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:24:20.803 12:20:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:20.803 12:20:21 -- host/auth.sh@68 -- # digest=sha256 00:24:20.803 12:20:21 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:20.803 12:20:21 -- host/auth.sh@68 -- # keyid=1 00:24:20.803 12:20:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:20.803 12:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.803 12:20:21 -- common/autotest_common.sh@10 -- # set +x 00:24:20.803 12:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.803 12:20:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:20.803 12:20:21 -- nvmf/common.sh@717 -- # local ip 00:24:20.803 12:20:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:20.803 12:20:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:20.803 12:20:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.803 12:20:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.803 12:20:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:20.803 12:20:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.803 12:20:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:20.803 12:20:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:20.803 12:20:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:20.803 12:20:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:20.803 12:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.803 12:20:21 -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 nvme0n1 00:24:21.376 12:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.376 12:20:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.376 12:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.376 12:20:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:21.376 12:20:22 -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 12:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.638 12:20:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.638 12:20:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.638 12:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.638 12:20:22 -- common/autotest_common.sh@10 -- # set +x 00:24:21.638 12:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.638 12:20:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:21.638 12:20:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:21.638 12:20:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:21.638 12:20:22 -- host/auth.sh@44 -- # digest=sha256 
00:24:21.638 12:20:22 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.638 12:20:22 -- host/auth.sh@44 -- # keyid=2 00:24:21.638 12:20:22 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:21.638 12:20:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:21.638 12:20:22 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:21.638 12:20:22 -- host/auth.sh@49 -- # echo DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:21.638 12:20:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:24:21.638 12:20:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:21.638 12:20:22 -- host/auth.sh@68 -- # digest=sha256 00:24:21.638 12:20:22 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:21.638 12:20:22 -- host/auth.sh@68 -- # keyid=2 00:24:21.638 12:20:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:21.638 12:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.638 12:20:22 -- common/autotest_common.sh@10 -- # set +x 00:24:21.638 12:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.638 12:20:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:21.638 12:20:22 -- nvmf/common.sh@717 -- # local ip 00:24:21.638 12:20:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:21.638 12:20:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:21.638 12:20:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.638 12:20:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.638 12:20:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:21.638 12:20:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.638 12:20:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:21.638 12:20:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:21.638 12:20:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:21.638 12:20:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:21.638 12:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.638 12:20:22 -- common/autotest_common.sh@10 -- # set +x 00:24:22.210 nvme0n1 00:24:22.210 12:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.210 12:20:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.210 12:20:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:22.210 12:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.210 12:20:23 -- common/autotest_common.sh@10 -- # set +x 00:24:22.210 12:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.471 12:20:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.471 12:20:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.471 12:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.471 12:20:23 -- common/autotest_common.sh@10 -- # set +x 00:24:22.471 12:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.471 12:20:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:22.471 12:20:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:22.471 12:20:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:22.471 12:20:23 -- host/auth.sh@44 -- # digest=sha256 00:24:22.471 12:20:23 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:22.471 12:20:23 -- host/auth.sh@44 -- # keyid=3 00:24:22.471 12:20:23 -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:22.471 12:20:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:22.471 12:20:23 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:22.471 12:20:23 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:22.471 12:20:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:24:22.471 12:20:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:22.471 12:20:23 -- host/auth.sh@68 -- # digest=sha256 00:24:22.471 12:20:23 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:22.471 12:20:23 -- host/auth.sh@68 -- # keyid=3 00:24:22.471 12:20:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:22.471 12:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.471 12:20:23 -- common/autotest_common.sh@10 -- # set +x 00:24:22.471 12:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.471 12:20:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:22.471 12:20:23 -- nvmf/common.sh@717 -- # local ip 00:24:22.471 12:20:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:22.471 12:20:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:22.471 12:20:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.471 12:20:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.471 12:20:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:22.471 12:20:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.471 12:20:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:22.471 12:20:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:22.471 12:20:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:22.471 12:20:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:22.471 12:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.471 12:20:23 -- common/autotest_common.sh@10 -- # set +x 00:24:23.042 nvme0n1 00:24:23.042 12:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.042 12:20:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.042 12:20:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:23.042 12:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.042 12:20:24 -- common/autotest_common.sh@10 -- # set +x 00:24:23.042 12:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.303 12:20:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.303 12:20:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.303 12:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.303 12:20:24 -- common/autotest_common.sh@10 -- # set +x 00:24:23.303 12:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.303 12:20:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:23.303 12:20:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:23.303 12:20:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:23.303 12:20:24 -- host/auth.sh@44 -- # digest=sha256 00:24:23.303 12:20:24 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:23.303 12:20:24 -- host/auth.sh@44 -- # keyid=4 00:24:23.303 12:20:24 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:23.303 
12:20:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:23.303 12:20:24 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:23.303 12:20:24 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:23.303 12:20:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:24:23.303 12:20:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:23.303 12:20:24 -- host/auth.sh@68 -- # digest=sha256 00:24:23.303 12:20:24 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:23.303 12:20:24 -- host/auth.sh@68 -- # keyid=4 00:24:23.303 12:20:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:23.303 12:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.303 12:20:24 -- common/autotest_common.sh@10 -- # set +x 00:24:23.303 12:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.303 12:20:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:23.303 12:20:24 -- nvmf/common.sh@717 -- # local ip 00:24:23.303 12:20:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:23.303 12:20:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:23.303 12:20:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.303 12:20:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.303 12:20:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:23.303 12:20:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.303 12:20:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:23.303 12:20:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:23.303 12:20:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:23.303 12:20:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:23.303 12:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.303 12:20:24 -- common/autotest_common.sh@10 -- # set +x 00:24:23.875 nvme0n1 00:24:23.875 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.875 12:20:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.875 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.875 12:20:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:23.875 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:23.875 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.137 12:20:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.137 12:20:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.137 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.137 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.137 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.137 12:20:25 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:24.137 12:20:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:24.137 12:20:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.137 12:20:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:24.137 12:20:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.137 12:20:25 -- host/auth.sh@44 -- # digest=sha384 00:24:24.137 12:20:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.137 12:20:25 -- host/auth.sh@44 -- # keyid=0 00:24:24.137 12:20:25 -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:24.137 12:20:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:24.137 12:20:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:24.137 12:20:25 -- host/auth.sh@49 -- # echo DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:24.137 12:20:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:24:24.137 12:20:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.137 12:20:25 -- host/auth.sh@68 -- # digest=sha384 00:24:24.137 12:20:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:24.137 12:20:25 -- host/auth.sh@68 -- # keyid=0 00:24:24.137 12:20:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:24.137 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.137 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.137 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.137 12:20:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.137 12:20:25 -- nvmf/common.sh@717 -- # local ip 00:24:24.137 12:20:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.137 12:20:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.137 12:20:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.137 12:20:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.137 12:20:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.137 12:20:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.137 12:20:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.137 12:20:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.137 12:20:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.137 12:20:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:24.137 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.137 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.137 nvme0n1 00:24:24.137 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.137 12:20:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.137 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.137 12:20:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.137 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.137 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.137 12:20:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.137 12:20:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.137 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.138 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.138 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.138 12:20:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.138 12:20:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:24.138 12:20:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.138 12:20:25 -- host/auth.sh@44 -- # digest=sha384 00:24:24.138 12:20:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.138 12:20:25 -- host/auth.sh@44 -- # keyid=1 00:24:24.138 12:20:25 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:24.138 12:20:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:24.138 
12:20:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:24.138 12:20:25 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:24.138 12:20:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:24:24.138 12:20:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.138 12:20:25 -- host/auth.sh@68 -- # digest=sha384 00:24:24.138 12:20:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:24.138 12:20:25 -- host/auth.sh@68 -- # keyid=1 00:24:24.138 12:20:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:24.138 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.138 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.138 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.138 12:20:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.138 12:20:25 -- nvmf/common.sh@717 -- # local ip 00:24:24.138 12:20:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.138 12:20:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.138 12:20:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.138 12:20:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.138 12:20:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.138 12:20:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.138 12:20:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.138 12:20:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.138 12:20:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.398 12:20:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:24.398 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.398 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.398 nvme0n1 00:24:24.398 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.398 12:20:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.398 12:20:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.398 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.398 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.398 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.398 12:20:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.398 12:20:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.398 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.398 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.398 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.399 12:20:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.399 12:20:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:24.399 12:20:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.399 12:20:25 -- host/auth.sh@44 -- # digest=sha384 00:24:24.399 12:20:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.399 12:20:25 -- host/auth.sh@44 -- # keyid=2 00:24:24.399 12:20:25 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:24.399 12:20:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:24.399 12:20:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:24.399 12:20:25 -- host/auth.sh@49 -- # echo 
DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:24.399 12:20:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:24:24.399 12:20:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.399 12:20:25 -- host/auth.sh@68 -- # digest=sha384 00:24:24.399 12:20:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:24.399 12:20:25 -- host/auth.sh@68 -- # keyid=2 00:24:24.399 12:20:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:24.399 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.399 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.399 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.399 12:20:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.399 12:20:25 -- nvmf/common.sh@717 -- # local ip 00:24:24.399 12:20:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.399 12:20:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.399 12:20:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.399 12:20:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.399 12:20:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.399 12:20:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.399 12:20:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.399 12:20:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.399 12:20:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.399 12:20:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:24.399 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.399 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.660 nvme0n1 00:24:24.660 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.660 12:20:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.660 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.660 12:20:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.660 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.660 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.660 12:20:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.660 12:20:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.660 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.660 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.660 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.660 12:20:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.660 12:20:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:24.660 12:20:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.660 12:20:25 -- host/auth.sh@44 -- # digest=sha384 00:24:24.660 12:20:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.660 12:20:25 -- host/auth.sh@44 -- # keyid=3 00:24:24.660 12:20:25 -- host/auth.sh@45 -- # key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:24.660 12:20:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:24.660 12:20:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:24.660 12:20:25 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:24.660 12:20:25 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:24:24.660 12:20:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.660 12:20:25 -- host/auth.sh@68 -- # digest=sha384 00:24:24.660 12:20:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:24.660 12:20:25 -- host/auth.sh@68 -- # keyid=3 00:24:24.660 12:20:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:24.660 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.660 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.660 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.660 12:20:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.660 12:20:25 -- nvmf/common.sh@717 -- # local ip 00:24:24.660 12:20:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.660 12:20:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.660 12:20:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.660 12:20:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.660 12:20:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.660 12:20:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.660 12:20:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.660 12:20:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.660 12:20:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.660 12:20:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:24.660 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.660 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.921 nvme0n1 00:24:24.921 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.921 12:20:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.921 12:20:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.921 12:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.921 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:24.921 12:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.921 12:20:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.921 12:20:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.921 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.921 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:24.921 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.921 12:20:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.921 12:20:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:24.921 12:20:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.921 12:20:26 -- host/auth.sh@44 -- # digest=sha384 00:24:24.921 12:20:26 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.921 12:20:26 -- host/auth.sh@44 -- # keyid=4 00:24:24.921 12:20:26 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:24.921 12:20:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:24.921 12:20:26 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:24.921 12:20:26 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:24.921 12:20:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:24:24.921 12:20:26 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:24:24.921 12:20:26 -- host/auth.sh@68 -- # digest=sha384 00:24:24.921 12:20:26 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:24.921 12:20:26 -- host/auth.sh@68 -- # keyid=4 00:24:24.921 12:20:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:24.921 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.921 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:24.921 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.921 12:20:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.921 12:20:26 -- nvmf/common.sh@717 -- # local ip 00:24:24.921 12:20:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.921 12:20:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.921 12:20:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.921 12:20:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.921 12:20:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.921 12:20:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.921 12:20:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.921 12:20:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.921 12:20:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.921 12:20:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.921 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.921 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:25.182 nvme0n1 00:24:25.182 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.182 12:20:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.182 12:20:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.182 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.182 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:25.182 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.182 12:20:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.182 12:20:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.182 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.182 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:25.182 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.182 12:20:26 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.182 12:20:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.182 12:20:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:25.182 12:20:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.182 12:20:26 -- host/auth.sh@44 -- # digest=sha384 00:24:25.182 12:20:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.182 12:20:26 -- host/auth.sh@44 -- # keyid=0 00:24:25.183 12:20:26 -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:25.183 12:20:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:25.183 12:20:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:25.183 12:20:26 -- host/auth.sh@49 -- # echo DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:25.183 12:20:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:24:25.183 12:20:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.183 12:20:26 -- host/auth.sh@68 -- # 
digest=sha384 00:24:25.183 12:20:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:25.183 12:20:26 -- host/auth.sh@68 -- # keyid=0 00:24:25.183 12:20:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:25.183 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.183 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:25.183 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.183 12:20:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.183 12:20:26 -- nvmf/common.sh@717 -- # local ip 00:24:25.183 12:20:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.183 12:20:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.183 12:20:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.183 12:20:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.183 12:20:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.183 12:20:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.183 12:20:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.183 12:20:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.183 12:20:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.183 12:20:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:25.183 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.183 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:25.444 nvme0n1 00:24:25.444 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.444 12:20:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.444 12:20:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.444 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.444 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:25.444 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.444 12:20:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.444 12:20:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.444 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.444 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:25.444 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.444 12:20:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.444 12:20:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:25.444 12:20:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.444 12:20:26 -- host/auth.sh@44 -- # digest=sha384 00:24:25.444 12:20:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.444 12:20:26 -- host/auth.sh@44 -- # keyid=1 00:24:25.444 12:20:26 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:25.444 12:20:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:25.444 12:20:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:25.444 12:20:26 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:25.444 12:20:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:24:25.444 12:20:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.444 12:20:26 -- host/auth.sh@68 -- # digest=sha384 00:24:25.444 12:20:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:25.444 12:20:26 -- host/auth.sh@68 
-- # keyid=1 00:24:25.444 12:20:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:25.444 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.444 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:25.444 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.444 12:20:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.444 12:20:26 -- nvmf/common.sh@717 -- # local ip 00:24:25.444 12:20:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.444 12:20:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.444 12:20:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.444 12:20:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.444 12:20:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.444 12:20:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.444 12:20:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.444 12:20:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.444 12:20:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.444 12:20:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:25.444 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.444 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:25.705 nvme0n1 00:24:25.705 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.705 12:20:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.705 12:20:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.705 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.705 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:25.705 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.705 12:20:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.705 12:20:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.705 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.705 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:25.705 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.705 12:20:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.705 12:20:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:25.705 12:20:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.705 12:20:26 -- host/auth.sh@44 -- # digest=sha384 00:24:25.705 12:20:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.705 12:20:26 -- host/auth.sh@44 -- # keyid=2 00:24:25.705 12:20:26 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:25.705 12:20:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:25.705 12:20:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:25.705 12:20:26 -- host/auth.sh@49 -- # echo DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:25.705 12:20:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:24:25.705 12:20:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.705 12:20:26 -- host/auth.sh@68 -- # digest=sha384 00:24:25.705 12:20:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:25.705 12:20:26 -- host/auth.sh@68 -- # keyid=2 00:24:25.705 12:20:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:25.705 12:20:26 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.705 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:25.705 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.705 12:20:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.705 12:20:26 -- nvmf/common.sh@717 -- # local ip 00:24:25.705 12:20:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.705 12:20:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.705 12:20:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.705 12:20:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.705 12:20:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.705 12:20:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.705 12:20:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.705 12:20:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.705 12:20:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.705 12:20:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:25.705 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.705 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:25.967 nvme0n1 00:24:25.967 12:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.967 12:20:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.967 12:20:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.967 12:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.967 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:24:25.967 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.967 12:20:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.967 12:20:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.967 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.967 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:25.967 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.967 12:20:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.967 12:20:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:25.967 12:20:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.967 12:20:27 -- host/auth.sh@44 -- # digest=sha384 00:24:25.967 12:20:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.967 12:20:27 -- host/auth.sh@44 -- # keyid=3 00:24:25.967 12:20:27 -- host/auth.sh@45 -- # key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:25.967 12:20:27 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:25.967 12:20:27 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:25.967 12:20:27 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:25.967 12:20:27 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:24:25.967 12:20:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.967 12:20:27 -- host/auth.sh@68 -- # digest=sha384 00:24:25.967 12:20:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:25.967 12:20:27 -- host/auth.sh@68 -- # keyid=3 00:24:25.967 12:20:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:25.967 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.968 12:20:27 -- common/autotest_common.sh@10 -- # set +x 
00:24:25.968 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.968 12:20:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.968 12:20:27 -- nvmf/common.sh@717 -- # local ip 00:24:25.968 12:20:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.968 12:20:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.968 12:20:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.968 12:20:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.968 12:20:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.968 12:20:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.968 12:20:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.968 12:20:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.968 12:20:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.968 12:20:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:25.968 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.968 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:26.229 nvme0n1 00:24:26.229 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.229 12:20:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.229 12:20:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.229 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.229 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:26.229 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.229 12:20:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.229 12:20:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.229 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.229 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:26.229 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.229 12:20:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.229 12:20:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:26.229 12:20:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.229 12:20:27 -- host/auth.sh@44 -- # digest=sha384 00:24:26.229 12:20:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:26.229 12:20:27 -- host/auth.sh@44 -- # keyid=4 00:24:26.229 12:20:27 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:26.229 12:20:27 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:26.229 12:20:27 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:26.229 12:20:27 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:26.229 12:20:27 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:24:26.229 12:20:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.229 12:20:27 -- host/auth.sh@68 -- # digest=sha384 00:24:26.229 12:20:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:26.229 12:20:27 -- host/auth.sh@68 -- # keyid=4 00:24:26.229 12:20:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:26.229 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.229 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:26.229 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:24:26.229 12:20:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.229 12:20:27 -- nvmf/common.sh@717 -- # local ip 00:24:26.230 12:20:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.230 12:20:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.230 12:20:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.230 12:20:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.230 12:20:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.230 12:20:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.230 12:20:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.230 12:20:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.230 12:20:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.230 12:20:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.230 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.230 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:26.490 nvme0n1 00:24:26.490 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.490 12:20:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.490 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.490 12:20:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.490 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:26.490 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.490 12:20:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.490 12:20:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.490 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.490 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:26.490 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.490 12:20:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.490 12:20:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.490 12:20:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:26.490 12:20:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.490 12:20:27 -- host/auth.sh@44 -- # digest=sha384 00:24:26.490 12:20:27 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.490 12:20:27 -- host/auth.sh@44 -- # keyid=0 00:24:26.490 12:20:27 -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:26.490 12:20:27 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:26.490 12:20:27 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:26.490 12:20:27 -- host/auth.sh@49 -- # echo DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:26.490 12:20:27 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:24:26.490 12:20:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.490 12:20:27 -- host/auth.sh@68 -- # digest=sha384 00:24:26.490 12:20:27 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:26.490 12:20:27 -- host/auth.sh@68 -- # keyid=0 00:24:26.490 12:20:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:26.490 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.490 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:26.490 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.490 12:20:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.490 12:20:27 -- 
nvmf/common.sh@717 -- # local ip 00:24:26.490 12:20:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.490 12:20:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.490 12:20:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.490 12:20:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.490 12:20:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.490 12:20:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.490 12:20:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.490 12:20:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.490 12:20:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.490 12:20:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:26.490 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.490 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:26.752 nvme0n1 00:24:26.752 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.752 12:20:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.752 12:20:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.752 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.752 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:26.752 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.752 12:20:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.752 12:20:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.752 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.752 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:26.752 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.752 12:20:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.752 12:20:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:26.752 12:20:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.752 12:20:27 -- host/auth.sh@44 -- # digest=sha384 00:24:26.752 12:20:27 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.752 12:20:27 -- host/auth.sh@44 -- # keyid=1 00:24:26.752 12:20:27 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:26.752 12:20:27 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:26.752 12:20:27 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:26.752 12:20:27 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:26.752 12:20:27 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:24:26.752 12:20:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.752 12:20:27 -- host/auth.sh@68 -- # digest=sha384 00:24:26.752 12:20:27 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:26.752 12:20:27 -- host/auth.sh@68 -- # keyid=1 00:24:26.752 12:20:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:26.752 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.752 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:26.752 12:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.752 12:20:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.752 12:20:27 -- nvmf/common.sh@717 -- # local ip 00:24:26.752 12:20:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.752 12:20:27 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.752 12:20:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.752 12:20:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.752 12:20:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.752 12:20:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.752 12:20:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.752 12:20:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.752 12:20:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.752 12:20:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:26.752 12:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.014 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:27.275 nvme0n1 00:24:27.275 12:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.275 12:20:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.275 12:20:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.275 12:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.275 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:27.275 12:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.275 12:20:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.275 12:20:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.275 12:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.275 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:27.275 12:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.275 12:20:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.275 12:20:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:27.275 12:20:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.275 12:20:28 -- host/auth.sh@44 -- # digest=sha384 00:24:27.275 12:20:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:27.275 12:20:28 -- host/auth.sh@44 -- # keyid=2 00:24:27.275 12:20:28 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:27.275 12:20:28 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:27.275 12:20:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:27.275 12:20:28 -- host/auth.sh@49 -- # echo DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:27.275 12:20:28 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:24:27.275 12:20:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.275 12:20:28 -- host/auth.sh@68 -- # digest=sha384 00:24:27.275 12:20:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:27.275 12:20:28 -- host/auth.sh@68 -- # keyid=2 00:24:27.275 12:20:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:27.275 12:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.275 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:27.275 12:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.275 12:20:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.275 12:20:28 -- nvmf/common.sh@717 -- # local ip 00:24:27.275 12:20:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.275 12:20:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.275 12:20:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.275 12:20:28 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.275 12:20:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.275 12:20:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.275 12:20:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.275 12:20:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.275 12:20:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.275 12:20:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:27.275 12:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.275 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:27.536 nvme0n1 00:24:27.536 12:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.536 12:20:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.536 12:20:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.536 12:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.536 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:27.536 12:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.536 12:20:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.536 12:20:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.536 12:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.536 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:27.536 12:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.536 12:20:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.536 12:20:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:27.536 12:20:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.536 12:20:28 -- host/auth.sh@44 -- # digest=sha384 00:24:27.536 12:20:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:27.536 12:20:28 -- host/auth.sh@44 -- # keyid=3 00:24:27.536 12:20:28 -- host/auth.sh@45 -- # key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:27.536 12:20:28 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:27.536 12:20:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:27.536 12:20:28 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:27.536 12:20:28 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:24:27.536 12:20:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.536 12:20:28 -- host/auth.sh@68 -- # digest=sha384 00:24:27.536 12:20:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:27.536 12:20:28 -- host/auth.sh@68 -- # keyid=3 00:24:27.537 12:20:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:27.537 12:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.537 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:27.537 12:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.537 12:20:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.537 12:20:28 -- nvmf/common.sh@717 -- # local ip 00:24:27.537 12:20:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.537 12:20:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.537 12:20:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.537 12:20:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.537 12:20:28 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:24:27.537 12:20:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.537 12:20:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.537 12:20:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.537 12:20:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.537 12:20:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:27.537 12:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.537 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:27.798 nvme0n1 00:24:27.798 12:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.798 12:20:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.798 12:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.798 12:20:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.798 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:24:27.798 12:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.798 12:20:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.798 12:20:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.798 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.798 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:24:28.060 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.060 12:20:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:28.060 12:20:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:28.060 12:20:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:28.060 12:20:29 -- host/auth.sh@44 -- # digest=sha384 00:24:28.060 12:20:29 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:28.060 12:20:29 -- host/auth.sh@44 -- # keyid=4 00:24:28.060 12:20:29 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:28.060 12:20:29 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:28.060 12:20:29 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:28.060 12:20:29 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:28.060 12:20:29 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:24:28.060 12:20:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:28.060 12:20:29 -- host/auth.sh@68 -- # digest=sha384 00:24:28.060 12:20:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:28.060 12:20:29 -- host/auth.sh@68 -- # keyid=4 00:24:28.060 12:20:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:28.060 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.060 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:24:28.060 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.060 12:20:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:28.060 12:20:29 -- nvmf/common.sh@717 -- # local ip 00:24:28.060 12:20:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:28.060 12:20:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:28.060 12:20:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.060 12:20:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.060 12:20:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:28.060 12:20:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:24:28.060 12:20:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:28.060 12:20:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:28.060 12:20:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:28.060 12:20:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:28.060 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.060 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:24:28.321 nvme0n1 00:24:28.321 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.321 12:20:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.321 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.321 12:20:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:28.321 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:24:28.321 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.321 12:20:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.321 12:20:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.321 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.321 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:24:28.321 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.321 12:20:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:28.321 12:20:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:28.321 12:20:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:28.321 12:20:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:28.321 12:20:29 -- host/auth.sh@44 -- # digest=sha384 00:24:28.321 12:20:29 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.321 12:20:29 -- host/auth.sh@44 -- # keyid=0 00:24:28.321 12:20:29 -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:28.321 12:20:29 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:28.321 12:20:29 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:28.321 12:20:29 -- host/auth.sh@49 -- # echo DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:28.321 12:20:29 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:24:28.321 12:20:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:28.321 12:20:29 -- host/auth.sh@68 -- # digest=sha384 00:24:28.321 12:20:29 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:28.321 12:20:29 -- host/auth.sh@68 -- # keyid=0 00:24:28.321 12:20:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:28.321 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.321 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:24:28.321 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.321 12:20:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:28.321 12:20:29 -- nvmf/common.sh@717 -- # local ip 00:24:28.322 12:20:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:28.322 12:20:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:28.322 12:20:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.322 12:20:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.322 12:20:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:28.322 12:20:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.322 12:20:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:28.322 
12:20:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:28.322 12:20:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:28.322 12:20:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:28.322 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.322 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:24:28.896 nvme0n1 00:24:28.897 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.897 12:20:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.897 12:20:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:28.897 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.897 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:24:28.897 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.897 12:20:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.897 12:20:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.897 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.897 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:24:28.897 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.897 12:20:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:28.897 12:20:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:28.897 12:20:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:28.897 12:20:29 -- host/auth.sh@44 -- # digest=sha384 00:24:28.897 12:20:29 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.897 12:20:29 -- host/auth.sh@44 -- # keyid=1 00:24:28.897 12:20:29 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:28.897 12:20:29 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:28.897 12:20:29 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:28.897 12:20:29 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:28.897 12:20:29 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:24:28.897 12:20:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:28.897 12:20:29 -- host/auth.sh@68 -- # digest=sha384 00:24:28.897 12:20:29 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:28.897 12:20:29 -- host/auth.sh@68 -- # keyid=1 00:24:28.897 12:20:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:28.897 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.897 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:24:28.897 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.897 12:20:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:28.897 12:20:29 -- nvmf/common.sh@717 -- # local ip 00:24:28.897 12:20:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:28.897 12:20:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:28.897 12:20:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.897 12:20:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.897 12:20:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:28.897 12:20:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.897 12:20:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:28.897 12:20:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:28.897 12:20:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
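
Note on the trace above and below: this is host/auth.sh iterating its full digest x DH-group x key-ID matrix. For every combination it installs the key on the kernel nvmet target with nvmet_auth_set_key, restricts the SPDK initiator to that digest and DH group with bdev_nvme_set_options, and then connect_authenticate attaches, verifies and detaches the controller. A minimal sketch of that outer loop, reconstructed from the trace (the array contents and the helper bodies are assumptions, not copied from the script):

  # digests/dhgroups as visible in this part of the log; the script may cover more
  digests=(sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  # keys[0..4] hold the DHHC-1 secrets printed in the trace

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (configfs)
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side (SPDK RPCs)
      done
    done
  done
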
00:24:28.897 12:20:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:28.897 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.897 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:24:29.472 nvme0n1 00:24:29.472 12:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.472 12:20:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.472 12:20:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.472 12:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.472 12:20:30 -- common/autotest_common.sh@10 -- # set +x 00:24:29.472 12:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.472 12:20:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.472 12:20:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.472 12:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.472 12:20:30 -- common/autotest_common.sh@10 -- # set +x 00:24:29.472 12:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.472 12:20:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.472 12:20:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:29.472 12:20:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.472 12:20:30 -- host/auth.sh@44 -- # digest=sha384 00:24:29.472 12:20:30 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.472 12:20:30 -- host/auth.sh@44 -- # keyid=2 00:24:29.472 12:20:30 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:29.472 12:20:30 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:29.472 12:20:30 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:29.472 12:20:30 -- host/auth.sh@49 -- # echo DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:29.472 12:20:30 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:24:29.472 12:20:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.472 12:20:30 -- host/auth.sh@68 -- # digest=sha384 00:24:29.472 12:20:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:29.472 12:20:30 -- host/auth.sh@68 -- # keyid=2 00:24:29.472 12:20:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:29.472 12:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.472 12:20:30 -- common/autotest_common.sh@10 -- # set +x 00:24:29.472 12:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.472 12:20:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.472 12:20:30 -- nvmf/common.sh@717 -- # local ip 00:24:29.472 12:20:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.472 12:20:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.472 12:20:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.472 12:20:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.472 12:20:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.472 12:20:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.472 12:20:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.472 12:20:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.472 12:20:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.472 12:20:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:29.472 12:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.472 12:20:30 -- common/autotest_common.sh@10 -- # set +x 00:24:29.734 nvme0n1 00:24:29.734 12:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.734 12:20:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.734 12:20:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.734 12:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.734 12:20:30 -- common/autotest_common.sh@10 -- # set +x 00:24:29.996 12:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.996 12:20:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.996 12:20:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.996 12:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.996 12:20:30 -- common/autotest_common.sh@10 -- # set +x 00:24:29.996 12:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.996 12:20:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.996 12:20:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:29.996 12:20:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.996 12:20:31 -- host/auth.sh@44 -- # digest=sha384 00:24:29.996 12:20:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.996 12:20:31 -- host/auth.sh@44 -- # keyid=3 00:24:29.996 12:20:31 -- host/auth.sh@45 -- # key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:29.996 12:20:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:29.996 12:20:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:29.996 12:20:31 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:29.996 12:20:31 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:24:29.996 12:20:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.996 12:20:31 -- host/auth.sh@68 -- # digest=sha384 00:24:29.996 12:20:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:29.996 12:20:31 -- host/auth.sh@68 -- # keyid=3 00:24:29.996 12:20:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:29.996 12:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.996 12:20:31 -- common/autotest_common.sh@10 -- # set +x 00:24:29.996 12:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.996 12:20:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.996 12:20:31 -- nvmf/common.sh@717 -- # local ip 00:24:29.996 12:20:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.996 12:20:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.996 12:20:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.996 12:20:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.996 12:20:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.996 12:20:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.996 12:20:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.996 12:20:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.996 12:20:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.996 12:20:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:29.996 12:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 
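
Each connect_authenticate pass in the trace is the same four-RPC sequence issued through rpc_cmd. A sketch of one pass, using only the calls and identifiers that appear verbatim above (the surrounding function body is an assumption):

  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  ip=$(get_main_ns_ip)    # resolves to 10.0.0.1 in this run
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # controller is up
  rpc_cmd bdev_nvme_detach_controller nvme0
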
00:24:29.996 12:20:31 -- common/autotest_common.sh@10 -- # set +x 00:24:30.569 nvme0n1 00:24:30.569 12:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.569 12:20:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.569 12:20:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:30.569 12:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.569 12:20:31 -- common/autotest_common.sh@10 -- # set +x 00:24:30.569 12:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.569 12:20:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.569 12:20:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.569 12:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.569 12:20:31 -- common/autotest_common.sh@10 -- # set +x 00:24:30.569 12:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.569 12:20:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:30.569 12:20:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:30.569 12:20:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:30.569 12:20:31 -- host/auth.sh@44 -- # digest=sha384 00:24:30.569 12:20:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:30.569 12:20:31 -- host/auth.sh@44 -- # keyid=4 00:24:30.569 12:20:31 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:30.569 12:20:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:30.569 12:20:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:30.569 12:20:31 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:30.569 12:20:31 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:24:30.569 12:20:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:30.569 12:20:31 -- host/auth.sh@68 -- # digest=sha384 00:24:30.569 12:20:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:30.569 12:20:31 -- host/auth.sh@68 -- # keyid=4 00:24:30.569 12:20:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:30.569 12:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.569 12:20:31 -- common/autotest_common.sh@10 -- # set +x 00:24:30.569 12:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.569 12:20:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:30.569 12:20:31 -- nvmf/common.sh@717 -- # local ip 00:24:30.569 12:20:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:30.569 12:20:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:30.569 12:20:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.569 12:20:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.569 12:20:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:30.569 12:20:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.569 12:20:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:30.569 12:20:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:30.569 12:20:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:30.569 12:20:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:30.569 12:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.569 12:20:31 -- common/autotest_common.sh@10 -- # set +x 00:24:30.830 
nvme0n1 00:24:30.830 12:20:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.830 12:20:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.830 12:20:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:30.830 12:20:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.830 12:20:32 -- common/autotest_common.sh@10 -- # set +x 00:24:31.091 12:20:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.091 12:20:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.091 12:20:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.091 12:20:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.091 12:20:32 -- common/autotest_common.sh@10 -- # set +x 00:24:31.091 12:20:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.091 12:20:32 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.091 12:20:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:31.091 12:20:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:31.091 12:20:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:31.091 12:20:32 -- host/auth.sh@44 -- # digest=sha384 00:24:31.091 12:20:32 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.091 12:20:32 -- host/auth.sh@44 -- # keyid=0 00:24:31.091 12:20:32 -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:31.091 12:20:32 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:31.091 12:20:32 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:31.091 12:20:32 -- host/auth.sh@49 -- # echo DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:31.091 12:20:32 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:24:31.091 12:20:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:31.091 12:20:32 -- host/auth.sh@68 -- # digest=sha384 00:24:31.091 12:20:32 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:31.091 12:20:32 -- host/auth.sh@68 -- # keyid=0 00:24:31.091 12:20:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:31.091 12:20:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.091 12:20:32 -- common/autotest_common.sh@10 -- # set +x 00:24:31.091 12:20:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.091 12:20:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:31.091 12:20:32 -- nvmf/common.sh@717 -- # local ip 00:24:31.091 12:20:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:31.091 12:20:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:31.091 12:20:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.091 12:20:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.091 12:20:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:31.091 12:20:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.091 12:20:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:31.091 12:20:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:31.091 12:20:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:31.091 12:20:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:31.091 12:20:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.091 12:20:32 -- common/autotest_common.sh@10 -- # set +x 00:24:31.663 nvme0n1 00:24:31.663 12:20:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
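
The nvmf/common.sh@717-731 lines that repeat before every attach are get_main_ns_ip picking the address to connect to: an associative array maps the transport to an environment variable name, and indirect expansion turns that name into the address (10.0.0.1 in this run). A sketch consistent with the trace; the name of the transport variable and the error handling are assumptions:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )
      [[ -z $TEST_TRANSPORT ]] && return 1     # "tcp" here (variable name assumed)
      ip=${ip_candidates[$TEST_TRANSPORT]}     # -> NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1              # indirect expansion -> 10.0.0.1
      echo "${!ip}"
  }
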
00:24:31.663 12:20:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.663 12:20:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:31.663 12:20:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.663 12:20:32 -- common/autotest_common.sh@10 -- # set +x 00:24:31.998 12:20:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.998 12:20:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.998 12:20:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.998 12:20:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.998 12:20:32 -- common/autotest_common.sh@10 -- # set +x 00:24:31.998 12:20:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.998 12:20:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:31.998 12:20:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:31.998 12:20:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:31.998 12:20:32 -- host/auth.sh@44 -- # digest=sha384 00:24:31.998 12:20:32 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.998 12:20:32 -- host/auth.sh@44 -- # keyid=1 00:24:31.998 12:20:32 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:31.998 12:20:32 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:31.998 12:20:32 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:31.998 12:20:32 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:31.998 12:20:32 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:24:31.998 12:20:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:31.998 12:20:32 -- host/auth.sh@68 -- # digest=sha384 00:24:31.998 12:20:32 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:31.998 12:20:32 -- host/auth.sh@68 -- # keyid=1 00:24:31.998 12:20:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:31.998 12:20:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.998 12:20:32 -- common/autotest_common.sh@10 -- # set +x 00:24:31.998 12:20:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.998 12:20:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:31.998 12:20:32 -- nvmf/common.sh@717 -- # local ip 00:24:31.998 12:20:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:31.998 12:20:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:31.998 12:20:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.998 12:20:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.998 12:20:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:31.998 12:20:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.998 12:20:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:31.998 12:20:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:31.998 12:20:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:31.998 12:20:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:31.998 12:20:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.998 12:20:32 -- common/autotest_common.sh@10 -- # set +x 00:24:32.630 nvme0n1 00:24:32.630 12:20:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.630 12:20:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.630 12:20:33 -- host/auth.sh@73 
-- # jq -r '.[].name' 00:24:32.630 12:20:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.630 12:20:33 -- common/autotest_common.sh@10 -- # set +x 00:24:32.630 12:20:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.630 12:20:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.630 12:20:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.630 12:20:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.630 12:20:33 -- common/autotest_common.sh@10 -- # set +x 00:24:32.630 12:20:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.630 12:20:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:32.630 12:20:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:32.630 12:20:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:32.630 12:20:33 -- host/auth.sh@44 -- # digest=sha384 00:24:32.630 12:20:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:32.630 12:20:33 -- host/auth.sh@44 -- # keyid=2 00:24:32.630 12:20:33 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:32.630 12:20:33 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:32.630 12:20:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:32.630 12:20:33 -- host/auth.sh@49 -- # echo DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:32.630 12:20:33 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:24:32.630 12:20:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:32.630 12:20:33 -- host/auth.sh@68 -- # digest=sha384 00:24:32.630 12:20:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:32.630 12:20:33 -- host/auth.sh@68 -- # keyid=2 00:24:32.630 12:20:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:32.630 12:20:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.630 12:20:33 -- common/autotest_common.sh@10 -- # set +x 00:24:32.630 12:20:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.630 12:20:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:32.630 12:20:33 -- nvmf/common.sh@717 -- # local ip 00:24:32.630 12:20:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:32.630 12:20:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:32.630 12:20:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.630 12:20:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.630 12:20:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:32.630 12:20:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.630 12:20:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:32.630 12:20:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:32.630 12:20:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:32.630 12:20:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:32.630 12:20:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.630 12:20:33 -- common/autotest_common.sh@10 -- # set +x 00:24:33.572 nvme0n1 00:24:33.572 12:20:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.572 12:20:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.572 12:20:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:33.572 12:20:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.572 12:20:34 -- common/autotest_common.sh@10 -- # set +x 
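
All of the --dhchap-key secrets cycled through above use the DH-HMAC-CHAP secret representation DHHC-1:<transform>:<base64>:, where the middle field is 00 for a plain secret and 01/02/03 for SHA-256/384/512-transformed secrets, and the base64 payload is the secret followed by a CRC-32 (stated as background on the format, not taken from this log). One of the key-0 secrets from the trace can be inspected like this:

  key='DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ:'  # key 0 above
  payload=${key#DHHC-1:*:}   # strip the "DHHC-1:<transform>:" prefix
  payload=${payload%:}       # strip the trailing ':'
  echo -n "$payload" | base64 -d | wc -c   # 36 bytes = 32-byte secret + 4-byte CRC-32
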
00:24:33.572 12:20:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.572 12:20:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.572 12:20:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.572 12:20:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.572 12:20:34 -- common/autotest_common.sh@10 -- # set +x 00:24:33.572 12:20:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.572 12:20:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:33.572 12:20:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:33.572 12:20:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:33.572 12:20:34 -- host/auth.sh@44 -- # digest=sha384 00:24:33.572 12:20:34 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:33.572 12:20:34 -- host/auth.sh@44 -- # keyid=3 00:24:33.572 12:20:34 -- host/auth.sh@45 -- # key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:33.572 12:20:34 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:33.572 12:20:34 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:33.572 12:20:34 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:33.572 12:20:34 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:24:33.572 12:20:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:33.572 12:20:34 -- host/auth.sh@68 -- # digest=sha384 00:24:33.572 12:20:34 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:33.572 12:20:34 -- host/auth.sh@68 -- # keyid=3 00:24:33.572 12:20:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:33.572 12:20:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.572 12:20:34 -- common/autotest_common.sh@10 -- # set +x 00:24:33.572 12:20:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.572 12:20:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:33.572 12:20:34 -- nvmf/common.sh@717 -- # local ip 00:24:33.572 12:20:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:33.572 12:20:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:33.572 12:20:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.572 12:20:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.572 12:20:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:33.572 12:20:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.572 12:20:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:33.572 12:20:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:33.572 12:20:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:33.572 12:20:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:33.572 12:20:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.572 12:20:34 -- common/autotest_common.sh@10 -- # set +x 00:24:34.142 nvme0n1 00:24:34.142 12:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.142 12:20:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.142 12:20:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:34.142 12:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.142 12:20:35 -- common/autotest_common.sh@10 -- # set +x 00:24:34.142 12:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.403 12:20:35 -- host/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:24:34.403 12:20:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.403 12:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.403 12:20:35 -- common/autotest_common.sh@10 -- # set +x 00:24:34.403 12:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.403 12:20:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:34.403 12:20:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:34.403 12:20:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:34.403 12:20:35 -- host/auth.sh@44 -- # digest=sha384 00:24:34.403 12:20:35 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:34.403 12:20:35 -- host/auth.sh@44 -- # keyid=4 00:24:34.403 12:20:35 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:34.403 12:20:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:34.403 12:20:35 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:34.403 12:20:35 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:34.403 12:20:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:24:34.403 12:20:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:34.403 12:20:35 -- host/auth.sh@68 -- # digest=sha384 00:24:34.403 12:20:35 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:34.403 12:20:35 -- host/auth.sh@68 -- # keyid=4 00:24:34.403 12:20:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:34.403 12:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.403 12:20:35 -- common/autotest_common.sh@10 -- # set +x 00:24:34.403 12:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.403 12:20:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:34.403 12:20:35 -- nvmf/common.sh@717 -- # local ip 00:24:34.403 12:20:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:34.403 12:20:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:34.403 12:20:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.403 12:20:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.403 12:20:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:34.403 12:20:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.403 12:20:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:34.403 12:20:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:34.403 12:20:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:34.403 12:20:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:34.403 12:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.403 12:20:35 -- common/autotest_common.sh@10 -- # set +x 00:24:34.975 nvme0n1 00:24:34.975 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.975 12:20:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.975 12:20:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:34.975 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.975 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:34.975 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.237 12:20:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.237 12:20:36 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:35.237 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.237 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:35.237 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.237 12:20:36 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:35.237 12:20:36 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:35.237 12:20:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:35.237 12:20:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:35.237 12:20:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:35.237 12:20:36 -- host/auth.sh@44 -- # digest=sha512 00:24:35.237 12:20:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:35.237 12:20:36 -- host/auth.sh@44 -- # keyid=0 00:24:35.237 12:20:36 -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:35.238 12:20:36 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:35.238 12:20:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:35.238 12:20:36 -- host/auth.sh@49 -- # echo DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:35.238 12:20:36 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:24:35.238 12:20:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:35.238 12:20:36 -- host/auth.sh@68 -- # digest=sha512 00:24:35.238 12:20:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:35.238 12:20:36 -- host/auth.sh@68 -- # keyid=0 00:24:35.238 12:20:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:35.238 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.238 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:35.238 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.238 12:20:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:35.238 12:20:36 -- nvmf/common.sh@717 -- # local ip 00:24:35.238 12:20:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:35.238 12:20:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:35.238 12:20:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.238 12:20:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.238 12:20:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:35.238 12:20:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.238 12:20:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:35.238 12:20:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:35.238 12:20:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:35.238 12:20:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:35.238 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.238 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:35.238 nvme0n1 00:24:35.238 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.238 12:20:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.238 12:20:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:35.238 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.238 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:35.238 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.238 12:20:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.238 12:20:36 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:35.238 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.238 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:35.499 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.499 12:20:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:35.500 12:20:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:35.500 12:20:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:35.500 12:20:36 -- host/auth.sh@44 -- # digest=sha512 00:24:35.500 12:20:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:35.500 12:20:36 -- host/auth.sh@44 -- # keyid=1 00:24:35.500 12:20:36 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:35.500 12:20:36 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:35.500 12:20:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:35.500 12:20:36 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:35.500 12:20:36 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:24:35.500 12:20:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:35.500 12:20:36 -- host/auth.sh@68 -- # digest=sha512 00:24:35.500 12:20:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:35.500 12:20:36 -- host/auth.sh@68 -- # keyid=1 00:24:35.500 12:20:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:35.500 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.500 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:35.500 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.500 12:20:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:35.500 12:20:36 -- nvmf/common.sh@717 -- # local ip 00:24:35.500 12:20:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:35.500 12:20:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:35.500 12:20:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.500 12:20:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.500 12:20:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:35.500 12:20:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.500 12:20:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:35.500 12:20:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:35.500 12:20:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:35.500 12:20:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:35.500 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.500 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:35.500 nvme0n1 00:24:35.500 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.500 12:20:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.500 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.500 12:20:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:35.500 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:35.500 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.500 12:20:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.500 12:20:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.500 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 
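
The echo 'hmac(shaNNN)' / echo ffdheNNNN / echo DHHC-1:... triples in the trace are nvmet_auth_set_key programming the Linux nvmet target for the test host. A hedged sketch of what such a helper typically writes; the configfs path and attribute names are assumptions about the kernel nvmet layout, not visible in this excerpt:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed host entry
  echo 'hmac(sha512)' > "$host/dhchap_hash"      # matches the echo 'hmac(sha512)' lines
  echo 'ffdhe2048'    > "$host/dhchap_dhgroup"   # matches the echo ffdhe2048 lines
  echo 'DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ:' > "$host/dhchap_key"
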
00:24:35.500 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:35.500 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.500 12:20:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:35.500 12:20:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:35.500 12:20:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:35.500 12:20:36 -- host/auth.sh@44 -- # digest=sha512 00:24:35.500 12:20:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:35.500 12:20:36 -- host/auth.sh@44 -- # keyid=2 00:24:35.500 12:20:36 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:35.500 12:20:36 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:35.500 12:20:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:35.500 12:20:36 -- host/auth.sh@49 -- # echo DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:35.500 12:20:36 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:24:35.500 12:20:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:35.500 12:20:36 -- host/auth.sh@68 -- # digest=sha512 00:24:35.500 12:20:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:35.500 12:20:36 -- host/auth.sh@68 -- # keyid=2 00:24:35.500 12:20:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:35.500 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.500 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:35.500 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.500 12:20:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:35.500 12:20:36 -- nvmf/common.sh@717 -- # local ip 00:24:35.500 12:20:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:35.500 12:20:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:35.500 12:20:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.500 12:20:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.500 12:20:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:35.500 12:20:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.500 12:20:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:35.500 12:20:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:35.500 12:20:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:35.500 12:20:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:35.500 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.500 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:35.763 nvme0n1 00:24:35.763 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.763 12:20:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.763 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.763 12:20:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:35.763 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:35.763 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.763 12:20:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.763 12:20:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.763 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.763 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:35.763 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.763 12:20:36 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:35.763 12:20:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:35.763 12:20:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:35.763 12:20:36 -- host/auth.sh@44 -- # digest=sha512 00:24:35.763 12:20:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:35.763 12:20:36 -- host/auth.sh@44 -- # keyid=3 00:24:35.763 12:20:36 -- host/auth.sh@45 -- # key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:35.763 12:20:36 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:35.763 12:20:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:35.763 12:20:36 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:35.763 12:20:36 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:24:35.763 12:20:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:35.763 12:20:36 -- host/auth.sh@68 -- # digest=sha512 00:24:35.763 12:20:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:35.763 12:20:36 -- host/auth.sh@68 -- # keyid=3 00:24:35.763 12:20:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:35.763 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.763 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:35.763 12:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.763 12:20:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:35.763 12:20:36 -- nvmf/common.sh@717 -- # local ip 00:24:35.763 12:20:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:35.763 12:20:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:35.763 12:20:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.763 12:20:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.763 12:20:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:35.763 12:20:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.763 12:20:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:35.763 12:20:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:35.763 12:20:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:35.763 12:20:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:35.763 12:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.763 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:36.026 nvme0n1 00:24:36.026 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.026 12:20:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.026 12:20:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.026 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.026 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.026 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.026 12:20:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.026 12:20:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.026 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.026 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.026 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.026 12:20:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.026 12:20:37 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe2048 4 00:24:36.026 12:20:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.026 12:20:37 -- host/auth.sh@44 -- # digest=sha512 00:24:36.026 12:20:37 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:36.026 12:20:37 -- host/auth.sh@44 -- # keyid=4 00:24:36.026 12:20:37 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:36.026 12:20:37 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.026 12:20:37 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:36.026 12:20:37 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:36.026 12:20:37 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:24:36.026 12:20:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.026 12:20:37 -- host/auth.sh@68 -- # digest=sha512 00:24:36.026 12:20:37 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:36.026 12:20:37 -- host/auth.sh@68 -- # keyid=4 00:24:36.026 12:20:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:36.026 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.026 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.026 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.026 12:20:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.026 12:20:37 -- nvmf/common.sh@717 -- # local ip 00:24:36.026 12:20:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.026 12:20:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.026 12:20:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.026 12:20:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.026 12:20:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.026 12:20:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.026 12:20:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.026 12:20:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.026 12:20:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.026 12:20:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:36.026 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.026 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.288 nvme0n1 00:24:36.288 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.288 12:20:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.288 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.288 12:20:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.288 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.288 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.288 12:20:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.288 12:20:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.288 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.288 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.288 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.288 12:20:37 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:36.288 12:20:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.288 12:20:37 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe3072 0 00:24:36.288 12:20:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.288 12:20:37 -- host/auth.sh@44 -- # digest=sha512 00:24:36.288 12:20:37 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:36.288 12:20:37 -- host/auth.sh@44 -- # keyid=0 00:24:36.288 12:20:37 -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:36.288 12:20:37 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.288 12:20:37 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:36.288 12:20:37 -- host/auth.sh@49 -- # echo DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:36.288 12:20:37 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:24:36.288 12:20:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.288 12:20:37 -- host/auth.sh@68 -- # digest=sha512 00:24:36.288 12:20:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:36.288 12:20:37 -- host/auth.sh@68 -- # keyid=0 00:24:36.288 12:20:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:36.288 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.288 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.288 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.288 12:20:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.288 12:20:37 -- nvmf/common.sh@717 -- # local ip 00:24:36.288 12:20:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.288 12:20:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.288 12:20:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.288 12:20:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.288 12:20:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.288 12:20:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.288 12:20:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.288 12:20:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.288 12:20:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.288 12:20:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:36.288 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.288 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.549 nvme0n1 00:24:36.549 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.549 12:20:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.549 12:20:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.549 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.549 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.549 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.549 12:20:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.549 12:20:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.549 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.549 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.549 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.549 12:20:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.549 12:20:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:36.549 12:20:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.549 12:20:37 -- host/auth.sh@44 -- # digest=sha512 00:24:36.549 
12:20:37 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:36.549 12:20:37 -- host/auth.sh@44 -- # keyid=1 00:24:36.549 12:20:37 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:36.549 12:20:37 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.549 12:20:37 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:36.550 12:20:37 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:36.550 12:20:37 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:24:36.550 12:20:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.550 12:20:37 -- host/auth.sh@68 -- # digest=sha512 00:24:36.550 12:20:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:36.550 12:20:37 -- host/auth.sh@68 -- # keyid=1 00:24:36.550 12:20:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:36.550 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.550 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.550 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.550 12:20:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.550 12:20:37 -- nvmf/common.sh@717 -- # local ip 00:24:36.550 12:20:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.550 12:20:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.550 12:20:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.550 12:20:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.550 12:20:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.550 12:20:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.550 12:20:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.550 12:20:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.550 12:20:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.550 12:20:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:36.550 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.550 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.812 nvme0n1 00:24:36.812 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.812 12:20:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.812 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.812 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.812 12:20:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.812 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.812 12:20:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.812 12:20:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.812 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.812 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.812 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.812 12:20:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.812 12:20:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:36.812 12:20:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.812 12:20:37 -- host/auth.sh@44 -- # digest=sha512 00:24:36.812 12:20:37 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:36.812 12:20:37 -- host/auth.sh@44 -- # keyid=2 00:24:36.812 
12:20:37 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:36.812 12:20:37 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.812 12:20:37 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:36.812 12:20:37 -- host/auth.sh@49 -- # echo DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:36.812 12:20:37 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:24:36.812 12:20:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.812 12:20:37 -- host/auth.sh@68 -- # digest=sha512 00:24:36.812 12:20:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:36.812 12:20:37 -- host/auth.sh@68 -- # keyid=2 00:24:36.812 12:20:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:36.812 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.813 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:36.813 12:20:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.813 12:20:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.813 12:20:37 -- nvmf/common.sh@717 -- # local ip 00:24:36.813 12:20:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.813 12:20:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.813 12:20:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.813 12:20:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.813 12:20:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.813 12:20:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.813 12:20:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.813 12:20:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.813 12:20:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.813 12:20:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:36.813 12:20:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.813 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:37.074 nvme0n1 00:24:37.075 12:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.075 12:20:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.075 12:20:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.075 12:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.075 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:37.075 12:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.075 12:20:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.075 12:20:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.075 12:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.075 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:37.075 12:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.075 12:20:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.075 12:20:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:37.075 12:20:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.075 12:20:38 -- host/auth.sh@44 -- # digest=sha512 00:24:37.075 12:20:38 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:37.075 12:20:38 -- host/auth.sh@44 -- # keyid=3 00:24:37.075 12:20:38 -- host/auth.sh@45 -- # key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:37.075 12:20:38 -- host/auth.sh@47 -- # 
echo 'hmac(sha512)' 00:24:37.075 12:20:38 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:37.075 12:20:38 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:37.075 12:20:38 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:24:37.075 12:20:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.075 12:20:38 -- host/auth.sh@68 -- # digest=sha512 00:24:37.075 12:20:38 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:37.075 12:20:38 -- host/auth.sh@68 -- # keyid=3 00:24:37.075 12:20:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:37.075 12:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.075 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:37.075 12:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.075 12:20:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.075 12:20:38 -- nvmf/common.sh@717 -- # local ip 00:24:37.075 12:20:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.075 12:20:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.075 12:20:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.075 12:20:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.075 12:20:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.075 12:20:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.075 12:20:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.075 12:20:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.075 12:20:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.075 12:20:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:37.075 12:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.075 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:37.337 nvme0n1 00:24:37.337 12:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.337 12:20:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.337 12:20:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.337 12:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.337 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:37.337 12:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.337 12:20:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.337 12:20:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.337 12:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.337 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:37.337 12:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.337 12:20:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.337 12:20:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:37.337 12:20:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.337 12:20:38 -- host/auth.sh@44 -- # digest=sha512 00:24:37.337 12:20:38 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:37.337 12:20:38 -- host/auth.sh@44 -- # keyid=4 00:24:37.337 12:20:38 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:37.337 12:20:38 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:37.337 12:20:38 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:37.337 
12:20:38 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:37.337 12:20:38 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:24:37.337 12:20:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.337 12:20:38 -- host/auth.sh@68 -- # digest=sha512 00:24:37.337 12:20:38 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:37.337 12:20:38 -- host/auth.sh@68 -- # keyid=4 00:24:37.337 12:20:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:37.337 12:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.337 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:37.337 12:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.337 12:20:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.337 12:20:38 -- nvmf/common.sh@717 -- # local ip 00:24:37.337 12:20:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.337 12:20:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.337 12:20:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.337 12:20:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.337 12:20:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.337 12:20:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.337 12:20:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.337 12:20:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.337 12:20:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.337 12:20:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.337 12:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.337 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:37.598 nvme0n1 00:24:37.598 12:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.598 12:20:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.598 12:20:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.598 12:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.598 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:37.598 12:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.598 12:20:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.598 12:20:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.598 12:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.598 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:37.598 12:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.598 12:20:38 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:37.598 12:20:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.598 12:20:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:37.598 12:20:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.598 12:20:38 -- host/auth.sh@44 -- # digest=sha512 00:24:37.598 12:20:38 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:37.598 12:20:38 -- host/auth.sh@44 -- # keyid=0 00:24:37.598 12:20:38 -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:37.598 12:20:38 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:37.598 12:20:38 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:37.598 12:20:38 -- host/auth.sh@49 -- # echo 
DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:37.598 12:20:38 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:24:37.598 12:20:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.598 12:20:38 -- host/auth.sh@68 -- # digest=sha512 00:24:37.598 12:20:38 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:37.598 12:20:38 -- host/auth.sh@68 -- # keyid=0 00:24:37.598 12:20:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:37.598 12:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.598 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:37.598 12:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.598 12:20:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.598 12:20:38 -- nvmf/common.sh@717 -- # local ip 00:24:37.598 12:20:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.598 12:20:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.598 12:20:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.598 12:20:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.598 12:20:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.598 12:20:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.598 12:20:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.598 12:20:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.598 12:20:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.598 12:20:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:37.598 12:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.598 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:37.860 nvme0n1 00:24:37.860 12:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.860 12:20:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.860 12:20:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.860 12:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.860 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:37.860 12:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.860 12:20:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.860 12:20:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.860 12:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.860 12:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:37.860 12:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.860 12:20:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.860 12:20:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:37.860 12:20:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.860 12:20:39 -- host/auth.sh@44 -- # digest=sha512 00:24:37.860 12:20:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:37.860 12:20:39 -- host/auth.sh@44 -- # keyid=1 00:24:37.860 12:20:39 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:37.860 12:20:39 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:37.860 12:20:39 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:37.860 12:20:39 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:37.860 12:20:39 -- host/auth.sh@111 -- # 
connect_authenticate sha512 ffdhe4096 1 00:24:37.860 12:20:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.860 12:20:39 -- host/auth.sh@68 -- # digest=sha512 00:24:37.860 12:20:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:37.860 12:20:39 -- host/auth.sh@68 -- # keyid=1 00:24:37.860 12:20:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:37.860 12:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.860 12:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:37.860 12:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.860 12:20:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.860 12:20:39 -- nvmf/common.sh@717 -- # local ip 00:24:37.860 12:20:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.860 12:20:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.860 12:20:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.860 12:20:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.860 12:20:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.860 12:20:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.860 12:20:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.860 12:20:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.860 12:20:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.860 12:20:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:37.860 12:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.860 12:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:38.121 nvme0n1 00:24:38.121 12:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.121 12:20:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.121 12:20:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.121 12:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.121 12:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:38.382 12:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.382 12:20:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.382 12:20:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.382 12:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.382 12:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:38.382 12:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.382 12:20:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.382 12:20:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:38.382 12:20:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.382 12:20:39 -- host/auth.sh@44 -- # digest=sha512 00:24:38.382 12:20:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:38.382 12:20:39 -- host/auth.sh@44 -- # keyid=2 00:24:38.382 12:20:39 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:38.382 12:20:39 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:38.382 12:20:39 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:38.382 12:20:39 -- host/auth.sh@49 -- # echo DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:38.382 12:20:39 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:24:38.382 12:20:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.382 12:20:39 -- host/auth.sh@68 -- # 
digest=sha512 00:24:38.382 12:20:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:38.382 12:20:39 -- host/auth.sh@68 -- # keyid=2 00:24:38.382 12:20:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:38.382 12:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.382 12:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:38.382 12:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.382 12:20:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.382 12:20:39 -- nvmf/common.sh@717 -- # local ip 00:24:38.382 12:20:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.382 12:20:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.382 12:20:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.382 12:20:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.382 12:20:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.382 12:20:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.382 12:20:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.382 12:20:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.382 12:20:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.382 12:20:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:38.382 12:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.382 12:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:38.644 nvme0n1 00:24:38.644 12:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.644 12:20:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.644 12:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.644 12:20:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.644 12:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:38.644 12:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.644 12:20:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.644 12:20:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.644 12:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.644 12:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:38.644 12:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.644 12:20:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.644 12:20:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:38.644 12:20:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.644 12:20:39 -- host/auth.sh@44 -- # digest=sha512 00:24:38.644 12:20:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:38.644 12:20:39 -- host/auth.sh@44 -- # keyid=3 00:24:38.644 12:20:39 -- host/auth.sh@45 -- # key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:38.644 12:20:39 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:38.644 12:20:39 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:38.644 12:20:39 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:38.644 12:20:39 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:24:38.644 12:20:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.644 12:20:39 -- host/auth.sh@68 -- # digest=sha512 00:24:38.644 12:20:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:38.644 12:20:39 -- host/auth.sh@68 
-- # keyid=3 00:24:38.644 12:20:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:38.644 12:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.644 12:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:38.644 12:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.644 12:20:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.644 12:20:39 -- nvmf/common.sh@717 -- # local ip 00:24:38.644 12:20:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.644 12:20:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.644 12:20:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.644 12:20:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.644 12:20:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.644 12:20:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.644 12:20:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.644 12:20:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.644 12:20:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.644 12:20:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:38.644 12:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.644 12:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:38.905 nvme0n1 00:24:38.905 12:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.905 12:20:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.905 12:20:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.905 12:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.905 12:20:40 -- common/autotest_common.sh@10 -- # set +x 00:24:38.905 12:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.905 12:20:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.905 12:20:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.905 12:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.905 12:20:40 -- common/autotest_common.sh@10 -- # set +x 00:24:38.905 12:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.905 12:20:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.905 12:20:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:38.905 12:20:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.905 12:20:40 -- host/auth.sh@44 -- # digest=sha512 00:24:38.905 12:20:40 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:38.905 12:20:40 -- host/auth.sh@44 -- # keyid=4 00:24:38.905 12:20:40 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:38.905 12:20:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:38.905 12:20:40 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:38.905 12:20:40 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:38.905 12:20:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:24:38.905 12:20:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.905 12:20:40 -- host/auth.sh@68 -- # digest=sha512 00:24:38.905 12:20:40 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:38.905 12:20:40 -- host/auth.sh@68 -- # keyid=4 00:24:38.905 12:20:40 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:38.905 12:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.905 12:20:40 -- common/autotest_common.sh@10 -- # set +x 00:24:38.905 12:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.905 12:20:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.905 12:20:40 -- nvmf/common.sh@717 -- # local ip 00:24:38.905 12:20:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.905 12:20:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.905 12:20:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.905 12:20:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.905 12:20:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.905 12:20:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.905 12:20:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.905 12:20:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.905 12:20:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.905 12:20:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:38.905 12:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.905 12:20:40 -- common/autotest_common.sh@10 -- # set +x 00:24:39.165 nvme0n1 00:24:39.165 12:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.165 12:20:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.165 12:20:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.165 12:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.165 12:20:40 -- common/autotest_common.sh@10 -- # set +x 00:24:39.165 12:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.426 12:20:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.426 12:20:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.426 12:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.426 12:20:40 -- common/autotest_common.sh@10 -- # set +x 00:24:39.426 12:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.426 12:20:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:39.426 12:20:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:39.426 12:20:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:39.426 12:20:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:39.426 12:20:40 -- host/auth.sh@44 -- # digest=sha512 00:24:39.426 12:20:40 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:39.426 12:20:40 -- host/auth.sh@44 -- # keyid=0 00:24:39.426 12:20:40 -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:39.426 12:20:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:39.426 12:20:40 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:39.426 12:20:40 -- host/auth.sh@49 -- # echo DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:39.426 12:20:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:24:39.426 12:20:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:39.426 12:20:40 -- host/auth.sh@68 -- # digest=sha512 00:24:39.426 12:20:40 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:39.426 12:20:40 -- host/auth.sh@68 -- # keyid=0 00:24:39.426 12:20:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:39.426 
12:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.426 12:20:40 -- common/autotest_common.sh@10 -- # set +x 00:24:39.426 12:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.426 12:20:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:39.426 12:20:40 -- nvmf/common.sh@717 -- # local ip 00:24:39.426 12:20:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:39.426 12:20:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:39.426 12:20:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.426 12:20:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.426 12:20:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:39.426 12:20:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.426 12:20:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:39.426 12:20:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:39.426 12:20:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:39.426 12:20:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:39.426 12:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.426 12:20:40 -- common/autotest_common.sh@10 -- # set +x 00:24:39.687 nvme0n1 00:24:39.687 12:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.688 12:20:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.688 12:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.688 12:20:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.688 12:20:40 -- common/autotest_common.sh@10 -- # set +x 00:24:39.948 12:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.948 12:20:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.948 12:20:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.948 12:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.948 12:20:40 -- common/autotest_common.sh@10 -- # set +x 00:24:39.948 12:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.948 12:20:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:39.948 12:20:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:39.948 12:20:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:39.948 12:20:40 -- host/auth.sh@44 -- # digest=sha512 00:24:39.948 12:20:40 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:39.948 12:20:40 -- host/auth.sh@44 -- # keyid=1 00:24:39.948 12:20:40 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:39.948 12:20:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:39.948 12:20:40 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:39.948 12:20:40 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:39.948 12:20:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:24:39.948 12:20:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:39.948 12:20:40 -- host/auth.sh@68 -- # digest=sha512 00:24:39.948 12:20:40 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:39.948 12:20:40 -- host/auth.sh@68 -- # keyid=1 00:24:39.948 12:20:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:39.948 12:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.948 12:20:40 -- common/autotest_common.sh@10 -- # 
set +x 00:24:39.948 12:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.948 12:20:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:39.948 12:20:40 -- nvmf/common.sh@717 -- # local ip 00:24:39.948 12:20:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:39.948 12:20:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:39.948 12:20:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.948 12:20:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.948 12:20:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:39.948 12:20:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.948 12:20:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:39.948 12:20:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:39.948 12:20:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:39.948 12:20:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:39.948 12:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.948 12:20:40 -- common/autotest_common.sh@10 -- # set +x 00:24:40.210 nvme0n1 00:24:40.210 12:20:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.470 12:20:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.470 12:20:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:40.470 12:20:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.470 12:20:41 -- common/autotest_common.sh@10 -- # set +x 00:24:40.470 12:20:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.470 12:20:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.470 12:20:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.470 12:20:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.470 12:20:41 -- common/autotest_common.sh@10 -- # set +x 00:24:40.470 12:20:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.470 12:20:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:40.470 12:20:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:40.470 12:20:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:40.470 12:20:41 -- host/auth.sh@44 -- # digest=sha512 00:24:40.470 12:20:41 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:40.470 12:20:41 -- host/auth.sh@44 -- # keyid=2 00:24:40.470 12:20:41 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:40.470 12:20:41 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:40.470 12:20:41 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:40.470 12:20:41 -- host/auth.sh@49 -- # echo DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:40.470 12:20:41 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:24:40.470 12:20:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:40.470 12:20:41 -- host/auth.sh@68 -- # digest=sha512 00:24:40.470 12:20:41 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:40.470 12:20:41 -- host/auth.sh@68 -- # keyid=2 00:24:40.470 12:20:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:40.470 12:20:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.470 12:20:41 -- common/autotest_common.sh@10 -- # set +x 00:24:40.470 12:20:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.470 12:20:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:40.470 12:20:41 -- 
nvmf/common.sh@717 -- # local ip 00:24:40.470 12:20:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:40.470 12:20:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:40.470 12:20:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.470 12:20:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.470 12:20:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:40.470 12:20:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.470 12:20:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:40.470 12:20:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:40.470 12:20:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:40.470 12:20:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:40.470 12:20:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.470 12:20:41 -- common/autotest_common.sh@10 -- # set +x 00:24:41.041 nvme0n1 00:24:41.041 12:20:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.041 12:20:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.041 12:20:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:41.041 12:20:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.041 12:20:41 -- common/autotest_common.sh@10 -- # set +x 00:24:41.041 12:20:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.041 12:20:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.041 12:20:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.041 12:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.041 12:20:42 -- common/autotest_common.sh@10 -- # set +x 00:24:41.041 12:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.041 12:20:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:41.041 12:20:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:41.041 12:20:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:41.041 12:20:42 -- host/auth.sh@44 -- # digest=sha512 00:24:41.041 12:20:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:41.041 12:20:42 -- host/auth.sh@44 -- # keyid=3 00:24:41.041 12:20:42 -- host/auth.sh@45 -- # key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:41.041 12:20:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:41.041 12:20:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:41.041 12:20:42 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:41.041 12:20:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:24:41.041 12:20:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:41.041 12:20:42 -- host/auth.sh@68 -- # digest=sha512 00:24:41.041 12:20:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:41.041 12:20:42 -- host/auth.sh@68 -- # keyid=3 00:24:41.041 12:20:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:41.041 12:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.041 12:20:42 -- common/autotest_common.sh@10 -- # set +x 00:24:41.041 12:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.041 12:20:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:41.041 12:20:42 -- nvmf/common.sh@717 -- # local ip 00:24:41.041 12:20:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:41.041 12:20:42 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:41.041 12:20:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.041 12:20:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.041 12:20:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:41.041 12:20:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.041 12:20:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:41.041 12:20:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:41.041 12:20:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:41.041 12:20:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:41.041 12:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.041 12:20:42 -- common/autotest_common.sh@10 -- # set +x 00:24:41.301 nvme0n1 00:24:41.301 12:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.301 12:20:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.301 12:20:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:41.301 12:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.301 12:20:42 -- common/autotest_common.sh@10 -- # set +x 00:24:41.562 12:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.562 12:20:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.562 12:20:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.562 12:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.562 12:20:42 -- common/autotest_common.sh@10 -- # set +x 00:24:41.562 12:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.562 12:20:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:41.562 12:20:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:41.562 12:20:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:41.562 12:20:42 -- host/auth.sh@44 -- # digest=sha512 00:24:41.562 12:20:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:41.562 12:20:42 -- host/auth.sh@44 -- # keyid=4 00:24:41.562 12:20:42 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:41.562 12:20:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:41.562 12:20:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:41.562 12:20:42 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:41.562 12:20:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:24:41.563 12:20:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:41.563 12:20:42 -- host/auth.sh@68 -- # digest=sha512 00:24:41.563 12:20:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:41.563 12:20:42 -- host/auth.sh@68 -- # keyid=4 00:24:41.563 12:20:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:41.563 12:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.563 12:20:42 -- common/autotest_common.sh@10 -- # set +x 00:24:41.563 12:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.563 12:20:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:41.563 12:20:42 -- nvmf/common.sh@717 -- # local ip 00:24:41.563 12:20:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:41.563 12:20:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:41.563 12:20:42 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.563 12:20:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.563 12:20:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:41.563 12:20:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.563 12:20:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:41.563 12:20:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:41.563 12:20:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:41.563 12:20:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:41.563 12:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.563 12:20:42 -- common/autotest_common.sh@10 -- # set +x 00:24:42.133 nvme0n1 00:24:42.133 12:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.133 12:20:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.133 12:20:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:42.133 12:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.133 12:20:43 -- common/autotest_common.sh@10 -- # set +x 00:24:42.133 12:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.133 12:20:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.133 12:20:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.133 12:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.133 12:20:43 -- common/autotest_common.sh@10 -- # set +x 00:24:42.133 12:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.133 12:20:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:42.133 12:20:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:42.133 12:20:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:42.133 12:20:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:42.133 12:20:43 -- host/auth.sh@44 -- # digest=sha512 00:24:42.133 12:20:43 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.133 12:20:43 -- host/auth.sh@44 -- # keyid=0 00:24:42.133 12:20:43 -- host/auth.sh@45 -- # key=DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:42.133 12:20:43 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:42.133 12:20:43 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:42.133 12:20:43 -- host/auth.sh@49 -- # echo DHHC-1:00:NzI3MTUzMzYyOGYzNjBiMzU5ZjNjZGNjYWY3ZWVkZWatAOsZ: 00:24:42.133 12:20:43 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:24:42.133 12:20:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:42.133 12:20:43 -- host/auth.sh@68 -- # digest=sha512 00:24:42.133 12:20:43 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:42.133 12:20:43 -- host/auth.sh@68 -- # keyid=0 00:24:42.133 12:20:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:42.133 12:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.133 12:20:43 -- common/autotest_common.sh@10 -- # set +x 00:24:42.133 12:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.133 12:20:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:42.133 12:20:43 -- nvmf/common.sh@717 -- # local ip 00:24:42.133 12:20:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:42.133 12:20:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:42.133 12:20:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.133 12:20:43 
-- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.133 12:20:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:42.133 12:20:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.133 12:20:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:42.133 12:20:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:42.133 12:20:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:42.133 12:20:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:42.133 12:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.133 12:20:43 -- common/autotest_common.sh@10 -- # set +x 00:24:42.703 nvme0n1 00:24:42.703 12:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.703 12:20:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.703 12:20:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:42.703 12:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.703 12:20:43 -- common/autotest_common.sh@10 -- # set +x 00:24:42.703 12:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.965 12:20:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.965 12:20:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.965 12:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.965 12:20:43 -- common/autotest_common.sh@10 -- # set +x 00:24:42.965 12:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.965 12:20:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:42.965 12:20:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:42.965 12:20:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:42.965 12:20:43 -- host/auth.sh@44 -- # digest=sha512 00:24:42.965 12:20:43 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.965 12:20:43 -- host/auth.sh@44 -- # keyid=1 00:24:42.965 12:20:43 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:42.965 12:20:43 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:42.965 12:20:43 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:42.965 12:20:43 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:42.965 12:20:43 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:24:42.965 12:20:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:42.965 12:20:43 -- host/auth.sh@68 -- # digest=sha512 00:24:42.965 12:20:43 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:42.965 12:20:43 -- host/auth.sh@68 -- # keyid=1 00:24:42.965 12:20:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:42.965 12:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.965 12:20:43 -- common/autotest_common.sh@10 -- # set +x 00:24:42.965 12:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.965 12:20:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:42.965 12:20:43 -- nvmf/common.sh@717 -- # local ip 00:24:42.965 12:20:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:42.965 12:20:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:42.965 12:20:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.965 12:20:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.965 12:20:43 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:24:42.965 12:20:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.965 12:20:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:42.965 12:20:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:42.965 12:20:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:42.965 12:20:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:42.965 12:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.965 12:20:43 -- common/autotest_common.sh@10 -- # set +x 00:24:43.537 nvme0n1 00:24:43.537 12:20:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.537 12:20:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.538 12:20:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:43.538 12:20:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.538 12:20:44 -- common/autotest_common.sh@10 -- # set +x 00:24:43.538 12:20:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.798 12:20:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.798 12:20:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.798 12:20:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.798 12:20:44 -- common/autotest_common.sh@10 -- # set +x 00:24:43.798 12:20:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.798 12:20:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:43.798 12:20:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:43.798 12:20:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:43.798 12:20:44 -- host/auth.sh@44 -- # digest=sha512 00:24:43.798 12:20:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.798 12:20:44 -- host/auth.sh@44 -- # keyid=2 00:24:43.798 12:20:44 -- host/auth.sh@45 -- # key=DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:43.798 12:20:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:43.798 12:20:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:43.798 12:20:44 -- host/auth.sh@49 -- # echo DHHC-1:01:M2QxYTRhYjMyNTU1NjM0ODk2ZDQ3MTZkZGE5YjU1MDCXHfQe: 00:24:43.798 12:20:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:24:43.798 12:20:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:43.798 12:20:44 -- host/auth.sh@68 -- # digest=sha512 00:24:43.798 12:20:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:43.798 12:20:44 -- host/auth.sh@68 -- # keyid=2 00:24:43.798 12:20:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:43.798 12:20:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.798 12:20:44 -- common/autotest_common.sh@10 -- # set +x 00:24:43.798 12:20:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.798 12:20:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:43.798 12:20:44 -- nvmf/common.sh@717 -- # local ip 00:24:43.798 12:20:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:43.798 12:20:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:43.798 12:20:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.799 12:20:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.799 12:20:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:43.799 12:20:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.799 12:20:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:43.799 
12:20:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:43.799 12:20:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:43.799 12:20:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:43.799 12:20:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.799 12:20:44 -- common/autotest_common.sh@10 -- # set +x 00:24:44.368 nvme0n1 00:24:44.368 12:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.368 12:20:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.368 12:20:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:44.368 12:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.368 12:20:45 -- common/autotest_common.sh@10 -- # set +x 00:24:44.368 12:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.368 12:20:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.368 12:20:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.368 12:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.368 12:20:45 -- common/autotest_common.sh@10 -- # set +x 00:24:44.628 12:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.628 12:20:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:44.628 12:20:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:44.628 12:20:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:44.628 12:20:45 -- host/auth.sh@44 -- # digest=sha512 00:24:44.628 12:20:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.628 12:20:45 -- host/auth.sh@44 -- # keyid=3 00:24:44.628 12:20:45 -- host/auth.sh@45 -- # key=DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:44.628 12:20:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:44.628 12:20:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:44.628 12:20:45 -- host/auth.sh@49 -- # echo DHHC-1:02:YmM5YjI3NmM0ODliZTg0ZWI2ZDE0NzUzNmZjZDY1MzkxNmMzN2Y5ZDVmMmQwMDQ2bsg/gA==: 00:24:44.628 12:20:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:24:44.628 12:20:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:44.628 12:20:45 -- host/auth.sh@68 -- # digest=sha512 00:24:44.628 12:20:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:44.628 12:20:45 -- host/auth.sh@68 -- # keyid=3 00:24:44.628 12:20:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:44.628 12:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.628 12:20:45 -- common/autotest_common.sh@10 -- # set +x 00:24:44.628 12:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.628 12:20:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:44.628 12:20:45 -- nvmf/common.sh@717 -- # local ip 00:24:44.628 12:20:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:44.628 12:20:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:44.628 12:20:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.628 12:20:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.628 12:20:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:44.628 12:20:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.628 12:20:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:44.628 12:20:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:44.628 12:20:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
00:24:44.628 12:20:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:44.628 12:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.628 12:20:45 -- common/autotest_common.sh@10 -- # set +x 00:24:45.199 nvme0n1 00:24:45.199 12:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.199 12:20:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.199 12:20:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:45.199 12:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.199 12:20:46 -- common/autotest_common.sh@10 -- # set +x 00:24:45.199 12:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.199 12:20:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.199 12:20:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.199 12:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.199 12:20:46 -- common/autotest_common.sh@10 -- # set +x 00:24:45.459 12:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.459 12:20:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:45.459 12:20:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:45.459 12:20:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:45.459 12:20:46 -- host/auth.sh@44 -- # digest=sha512 00:24:45.459 12:20:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.459 12:20:46 -- host/auth.sh@44 -- # keyid=4 00:24:45.459 12:20:46 -- host/auth.sh@45 -- # key=DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:45.459 12:20:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:45.459 12:20:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:45.459 12:20:46 -- host/auth.sh@49 -- # echo DHHC-1:03:MzFjZGJkYTIxMzIwNDdiNDNkMGIwZjg5Y2JkZGY0N2ZiMTNjMzI4ODczNDEyZTVmODE2NGU2NTExYWM3N2Q5MhKdksY=: 00:24:45.459 12:20:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:24:45.459 12:20:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:45.459 12:20:46 -- host/auth.sh@68 -- # digest=sha512 00:24:45.459 12:20:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:45.459 12:20:46 -- host/auth.sh@68 -- # keyid=4 00:24:45.459 12:20:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:45.459 12:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.459 12:20:46 -- common/autotest_common.sh@10 -- # set +x 00:24:45.459 12:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.459 12:20:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:45.459 12:20:46 -- nvmf/common.sh@717 -- # local ip 00:24:45.459 12:20:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:45.459 12:20:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:45.459 12:20:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.459 12:20:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.459 12:20:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:45.459 12:20:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.459 12:20:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:45.459 12:20:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:45.459 12:20:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:45.459 12:20:46 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:45.460 12:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.460 12:20:46 -- common/autotest_common.sh@10 -- # set +x 00:24:46.032 nvme0n1 00:24:46.032 12:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.032 12:20:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.032 12:20:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:46.032 12:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.032 12:20:47 -- common/autotest_common.sh@10 -- # set +x 00:24:46.032 12:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.032 12:20:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.032 12:20:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.032 12:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.032 12:20:47 -- common/autotest_common.sh@10 -- # set +x 00:24:46.293 12:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.293 12:20:47 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:46.293 12:20:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:46.293 12:20:47 -- host/auth.sh@44 -- # digest=sha256 00:24:46.294 12:20:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.294 12:20:47 -- host/auth.sh@44 -- # keyid=1 00:24:46.294 12:20:47 -- host/auth.sh@45 -- # key=DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:46.294 12:20:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:46.294 12:20:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:46.294 12:20:47 -- host/auth.sh@49 -- # echo DHHC-1:00:YjY1NzhmZTYxY2M4M2MzOTVjMTVkNjE0ZmEyYzA3Y2QzMGE0NzI3MTY4NTU2YjM3PF0etg==: 00:24:46.294 12:20:47 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:46.294 12:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.294 12:20:47 -- common/autotest_common.sh@10 -- # set +x 00:24:46.294 12:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.294 12:20:47 -- host/auth.sh@119 -- # get_main_ns_ip 00:24:46.294 12:20:47 -- nvmf/common.sh@717 -- # local ip 00:24:46.294 12:20:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:46.294 12:20:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:46.294 12:20:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.294 12:20:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.294 12:20:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:46.294 12:20:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.294 12:20:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:46.294 12:20:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:46.294 12:20:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:46.294 12:20:47 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:46.294 12:20:47 -- common/autotest_common.sh@638 -- # local es=0 00:24:46.294 12:20:47 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:46.294 12:20:47 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:46.294 12:20:47 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.294 12:20:47 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:46.294 12:20:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.294 12:20:47 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:46.294 12:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.294 12:20:47 -- common/autotest_common.sh@10 -- # set +x 00:24:46.294 request: 00:24:46.294 { 00:24:46.294 "name": "nvme0", 00:24:46.294 "trtype": "tcp", 00:24:46.294 "traddr": "10.0.0.1", 00:24:46.294 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:46.294 "adrfam": "ipv4", 00:24:46.294 "trsvcid": "4420", 00:24:46.294 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:46.294 "method": "bdev_nvme_attach_controller", 00:24:46.294 "req_id": 1 00:24:46.294 } 00:24:46.294 Got JSON-RPC error response 00:24:46.294 response: 00:24:46.294 { 00:24:46.294 "code": -32602, 00:24:46.294 "message": "Invalid parameters" 00:24:46.294 } 00:24:46.294 12:20:47 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:46.294 12:20:47 -- common/autotest_common.sh@641 -- # es=1 00:24:46.294 12:20:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:46.294 12:20:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:46.294 12:20:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:46.294 12:20:47 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.294 12:20:47 -- host/auth.sh@121 -- # jq length 00:24:46.294 12:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.294 12:20:47 -- common/autotest_common.sh@10 -- # set +x 00:24:46.294 12:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.294 12:20:47 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:24:46.294 12:20:47 -- host/auth.sh@124 -- # get_main_ns_ip 00:24:46.294 12:20:47 -- nvmf/common.sh@717 -- # local ip 00:24:46.294 12:20:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:46.294 12:20:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:46.294 12:20:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.294 12:20:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.294 12:20:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:46.294 12:20:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.294 12:20:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:46.294 12:20:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:46.294 12:20:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:46.294 12:20:47 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:46.294 12:20:47 -- common/autotest_common.sh@638 -- # local es=0 00:24:46.294 12:20:47 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:46.294 12:20:47 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:46.294 12:20:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.294 12:20:47 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:46.294 12:20:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.294 12:20:47 -- 
common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:46.294 12:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.294 12:20:47 -- common/autotest_common.sh@10 -- # set +x 00:24:46.294 request: 00:24:46.294 { 00:24:46.294 "name": "nvme0", 00:24:46.294 "trtype": "tcp", 00:24:46.294 "traddr": "10.0.0.1", 00:24:46.294 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:46.294 "adrfam": "ipv4", 00:24:46.294 "trsvcid": "4420", 00:24:46.294 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:46.294 "dhchap_key": "key2", 00:24:46.294 "method": "bdev_nvme_attach_controller", 00:24:46.294 "req_id": 1 00:24:46.294 } 00:24:46.294 Got JSON-RPC error response 00:24:46.294 response: 00:24:46.294 { 00:24:46.294 "code": -32602, 00:24:46.294 "message": "Invalid parameters" 00:24:46.294 } 00:24:46.294 12:20:47 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:46.294 12:20:47 -- common/autotest_common.sh@641 -- # es=1 00:24:46.294 12:20:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:46.294 12:20:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:46.294 12:20:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:46.294 12:20:47 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.294 12:20:47 -- host/auth.sh@127 -- # jq length 00:24:46.294 12:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.294 12:20:47 -- common/autotest_common.sh@10 -- # set +x 00:24:46.294 12:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.554 12:20:47 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:24:46.555 12:20:47 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:46.555 12:20:47 -- host/auth.sh@130 -- # cleanup 00:24:46.555 12:20:47 -- host/auth.sh@24 -- # nvmftestfini 00:24:46.555 12:20:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:46.555 12:20:47 -- nvmf/common.sh@117 -- # sync 00:24:46.555 12:20:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:46.555 12:20:47 -- nvmf/common.sh@120 -- # set +e 00:24:46.555 12:20:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:46.555 12:20:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:46.555 rmmod nvme_tcp 00:24:46.555 rmmod nvme_fabrics 00:24:46.555 12:20:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:46.555 12:20:47 -- nvmf/common.sh@124 -- # set -e 00:24:46.555 12:20:47 -- nvmf/common.sh@125 -- # return 0 00:24:46.555 12:20:47 -- nvmf/common.sh@478 -- # '[' -n 3531831 ']' 00:24:46.555 12:20:47 -- nvmf/common.sh@479 -- # killprocess 3531831 00:24:46.555 12:20:47 -- common/autotest_common.sh@936 -- # '[' -z 3531831 ']' 00:24:46.555 12:20:47 -- common/autotest_common.sh@940 -- # kill -0 3531831 00:24:46.555 12:20:47 -- common/autotest_common.sh@941 -- # uname 00:24:46.555 12:20:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:46.555 12:20:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3531831 00:24:46.555 12:20:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:46.555 12:20:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:46.555 12:20:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3531831' 00:24:46.555 killing process with pid 3531831 00:24:46.555 12:20:47 -- common/autotest_common.sh@955 -- # kill 3531831 00:24:46.555 12:20:47 -- common/autotest_common.sh@960 -- # wait 3531831 00:24:46.555 12:20:47 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:46.555 12:20:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:46.555 12:20:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:46.555 12:20:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:46.555 12:20:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:46.555 12:20:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.555 12:20:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.555 12:20:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.148 12:20:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:49.148 12:20:49 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:49.148 12:20:49 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:49.148 12:20:49 -- host/auth.sh@27 -- # clean_kernel_target 00:24:49.148 12:20:49 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:49.148 12:20:49 -- nvmf/common.sh@675 -- # echo 0 00:24:49.148 12:20:49 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:49.148 12:20:49 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:49.148 12:20:49 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:49.148 12:20:49 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:49.148 12:20:49 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:49.148 12:20:49 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:49.148 12:20:49 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:52.447 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:52.447 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:24:52.708 12:20:53 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.jjQ /tmp/spdk.key-null.8XF /tmp/spdk.key-sha256.A4R /tmp/spdk.key-sha384.9kr /tmp/spdk.key-sha512.brD /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:52.708 12:20:53 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:56.069 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:24:56.069 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:24:56.069 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:24:56.069 0000:80:01.5 (8086 0b00): Already using the 
vfio-pci driver 00:24:56.069 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:24:56.069 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:24:56.069 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:24:56.069 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:24:56.070 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:24:56.070 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:24:56.070 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:24:56.070 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:24:56.070 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:24:56.070 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:24:56.070 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:24:56.070 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:24:56.070 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:24:56.330 00:24:56.330 real 0m58.049s 00:24:56.330 user 0m51.684s 00:24:56.330 sys 0m14.963s 00:24:56.330 12:20:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:56.330 12:20:57 -- common/autotest_common.sh@10 -- # set +x 00:24:56.330 ************************************ 00:24:56.330 END TEST nvmf_auth 00:24:56.330 ************************************ 00:24:56.330 12:20:57 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:24:56.330 12:20:57 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:56.330 12:20:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:56.330 12:20:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:56.330 12:20:57 -- common/autotest_common.sh@10 -- # set +x 00:24:56.591 ************************************ 00:24:56.591 START TEST nvmf_digest 00:24:56.591 ************************************ 00:24:56.591 12:20:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:56.591 * Looking for test storage... 
00:24:56.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:56.591 12:20:57 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.591 12:20:57 -- nvmf/common.sh@7 -- # uname -s 00:24:56.591 12:20:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.591 12:20:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.591 12:20:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.591 12:20:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.592 12:20:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.592 12:20:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.592 12:20:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.592 12:20:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.592 12:20:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.592 12:20:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.592 12:20:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:56.592 12:20:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:56.592 12:20:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.592 12:20:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.592 12:20:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.592 12:20:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.592 12:20:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:56.592 12:20:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.592 12:20:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.592 12:20:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.592 12:20:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.592 12:20:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.592 12:20:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.592 12:20:57 -- paths/export.sh@5 -- # export PATH 00:24:56.592 12:20:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.592 12:20:57 -- nvmf/common.sh@47 -- # : 0 00:24:56.592 12:20:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:56.592 12:20:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:56.592 12:20:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.592 12:20:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.592 12:20:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.592 12:20:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:56.592 12:20:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:56.592 12:20:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:56.592 12:20:57 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:56.592 12:20:57 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:56.592 12:20:57 -- host/digest.sh@16 -- # runtime=2 00:24:56.592 12:20:57 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:56.592 12:20:57 -- host/digest.sh@138 -- # nvmftestinit 00:24:56.592 12:20:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:56.592 12:20:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.592 12:20:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:56.592 12:20:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:56.592 12:20:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:56.592 12:20:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.592 12:20:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.592 12:20:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.592 12:20:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:56.592 12:20:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:56.592 12:20:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:56.592 12:20:57 -- common/autotest_common.sh@10 -- # set +x 00:25:04.740 12:21:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:04.740 12:21:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:04.740 12:21:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:04.740 12:21:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:04.740 12:21:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:04.740 12:21:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:04.740 12:21:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:04.740 12:21:04 -- 
nvmf/common.sh@295 -- # net_devs=() 00:25:04.740 12:21:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:04.740 12:21:04 -- nvmf/common.sh@296 -- # e810=() 00:25:04.740 12:21:04 -- nvmf/common.sh@296 -- # local -ga e810 00:25:04.740 12:21:04 -- nvmf/common.sh@297 -- # x722=() 00:25:04.740 12:21:04 -- nvmf/common.sh@297 -- # local -ga x722 00:25:04.740 12:21:04 -- nvmf/common.sh@298 -- # mlx=() 00:25:04.740 12:21:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:04.740 12:21:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:04.740 12:21:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:04.740 12:21:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:04.740 12:21:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:04.740 12:21:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:04.740 12:21:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:04.740 12:21:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:04.740 12:21:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:04.740 12:21:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:04.740 12:21:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:04.740 12:21:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:04.740 12:21:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:04.740 12:21:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:04.740 12:21:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:04.740 12:21:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:04.740 12:21:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:04.740 12:21:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:04.740 12:21:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:04.740 12:21:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:04.740 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:04.740 12:21:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:04.741 12:21:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:04.741 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:04.741 12:21:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:04.741 12:21:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:04.741 12:21:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.741 12:21:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:04.741 12:21:04 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.741 12:21:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:04.741 Found net devices under 0000:31:00.0: cvl_0_0 00:25:04.741 12:21:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.741 12:21:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:04.741 12:21:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.741 12:21:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:04.741 12:21:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.741 12:21:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:04.741 Found net devices under 0000:31:00.1: cvl_0_1 00:25:04.741 12:21:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.741 12:21:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:04.741 12:21:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:04.741 12:21:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:04.741 12:21:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:04.741 12:21:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:04.741 12:21:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:04.741 12:21:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:04.741 12:21:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:04.741 12:21:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:04.741 12:21:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:04.741 12:21:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:04.741 12:21:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:04.741 12:21:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:04.741 12:21:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:04.741 12:21:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:04.741 12:21:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:04.741 12:21:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:04.741 12:21:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:04.741 12:21:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:04.741 12:21:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:04.741 12:21:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:04.741 12:21:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:04.741 12:21:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:04.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:04.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:25:04.741 00:25:04.741 --- 10.0.0.2 ping statistics --- 00:25:04.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.741 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:25:04.741 12:21:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:04.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:04.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:25:04.741 00:25:04.741 --- 10.0.0.1 ping statistics --- 00:25:04.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.741 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:25:04.741 12:21:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:04.741 12:21:04 -- nvmf/common.sh@411 -- # return 0 00:25:04.741 12:21:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:04.741 12:21:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:04.741 12:21:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:04.741 12:21:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:04.741 12:21:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:04.741 12:21:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:04.741 12:21:04 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:04.741 12:21:04 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:04.741 12:21:04 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:04.741 12:21:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:04.741 12:21:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:04.741 12:21:04 -- common/autotest_common.sh@10 -- # set +x 00:25:04.741 ************************************ 00:25:04.741 START TEST nvmf_digest_clean 00:25:04.741 ************************************ 00:25:04.741 12:21:05 -- common/autotest_common.sh@1111 -- # run_digest 00:25:04.741 12:21:05 -- host/digest.sh@120 -- # local dsa_initiator 00:25:04.741 12:21:05 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:04.741 12:21:05 -- host/digest.sh@121 -- # dsa_initiator=false 00:25:04.741 12:21:05 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:04.741 12:21:05 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:04.741 12:21:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:04.741 12:21:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:04.741 12:21:05 -- common/autotest_common.sh@10 -- # set +x 00:25:04.741 12:21:05 -- nvmf/common.sh@470 -- # nvmfpid=3548718 00:25:04.741 12:21:05 -- nvmf/common.sh@471 -- # waitforlisten 3548718 00:25:04.741 12:21:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:04.741 12:21:05 -- common/autotest_common.sh@817 -- # '[' -z 3548718 ']' 00:25:04.741 12:21:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.741 12:21:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:04.741 12:21:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.741 12:21:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:04.741 12:21:05 -- common/autotest_common.sh@10 -- # set +x 00:25:04.741 [2024-04-26 12:21:05.113163] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:25:04.741 [2024-04-26 12:21:05.113222] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.741 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.741 [2024-04-26 12:21:05.184791] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.741 [2024-04-26 12:21:05.257421] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.741 [2024-04-26 12:21:05.257461] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.741 [2024-04-26 12:21:05.257468] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.741 [2024-04-26 12:21:05.257475] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.741 [2024-04-26 12:21:05.257481] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:04.741 [2024-04-26 12:21:05.257505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.741 12:21:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:04.741 12:21:05 -- common/autotest_common.sh@850 -- # return 0 00:25:04.741 12:21:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:04.741 12:21:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:04.741 12:21:05 -- common/autotest_common.sh@10 -- # set +x 00:25:04.741 12:21:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.741 12:21:05 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:04.741 12:21:05 -- host/digest.sh@126 -- # common_target_config 00:25:04.741 12:21:05 -- host/digest.sh@43 -- # rpc_cmd 00:25:04.741 12:21:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.741 12:21:05 -- common/autotest_common.sh@10 -- # set +x 00:25:05.003 null0 00:25:05.003 [2024-04-26 12:21:05.992672] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.003 [2024-04-26 12:21:06.016881] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.003 12:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.003 12:21:06 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:05.003 12:21:06 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:05.003 12:21:06 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:05.003 12:21:06 -- host/digest.sh@80 -- # rw=randread 00:25:05.003 12:21:06 -- host/digest.sh@80 -- # bs=4096 00:25:05.003 12:21:06 -- host/digest.sh@80 -- # qd=128 00:25:05.003 12:21:06 -- host/digest.sh@80 -- # scan_dsa=false 00:25:05.003 12:21:06 -- host/digest.sh@83 -- # bperfpid=3548925 00:25:05.003 12:21:06 -- host/digest.sh@84 -- # waitforlisten 3548925 /var/tmp/bperf.sock 00:25:05.003 12:21:06 -- common/autotest_common.sh@817 -- # '[' -z 3548925 ']' 00:25:05.003 12:21:06 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:05.003 12:21:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:05.003 12:21:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:05.003 12:21:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:05.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:05.003 12:21:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:05.003 12:21:06 -- common/autotest_common.sh@10 -- # set +x 00:25:05.003 [2024-04-26 12:21:06.070688] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:05.003 [2024-04-26 12:21:06.070735] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3548925 ] 00:25:05.003 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.003 [2024-04-26 12:21:06.147174] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.003 [2024-04-26 12:21:06.209893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.946 12:21:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:05.946 12:21:06 -- common/autotest_common.sh@850 -- # return 0 00:25:05.946 12:21:06 -- host/digest.sh@86 -- # false 00:25:05.946 12:21:06 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:05.946 12:21:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:05.946 12:21:07 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:05.946 12:21:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.207 nvme0n1 00:25:06.207 12:21:07 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:06.207 12:21:07 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:06.467 Running I/O for 2 seconds... 
00:25:08.383 00:25:08.383 Latency(us) 00:25:08.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.383 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:08.383 nvme0n1 : 2.00 19284.93 75.33 0.00 0.00 6629.77 3044.69 23156.05 00:25:08.383 =================================================================================================================== 00:25:08.383 Total : 19284.93 75.33 0.00 0.00 6629.77 3044.69 23156.05 00:25:08.383 0 00:25:08.383 12:21:09 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:08.383 12:21:09 -- host/digest.sh@93 -- # get_accel_stats 00:25:08.383 12:21:09 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:08.383 12:21:09 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:08.383 | select(.opcode=="crc32c") 00:25:08.383 | "\(.module_name) \(.executed)"' 00:25:08.383 12:21:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:08.644 12:21:09 -- host/digest.sh@94 -- # false 00:25:08.644 12:21:09 -- host/digest.sh@94 -- # exp_module=software 00:25:08.644 12:21:09 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:08.644 12:21:09 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:08.644 12:21:09 -- host/digest.sh@98 -- # killprocess 3548925 00:25:08.644 12:21:09 -- common/autotest_common.sh@936 -- # '[' -z 3548925 ']' 00:25:08.644 12:21:09 -- common/autotest_common.sh@940 -- # kill -0 3548925 00:25:08.644 12:21:09 -- common/autotest_common.sh@941 -- # uname 00:25:08.644 12:21:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:08.644 12:21:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3548925 00:25:08.644 12:21:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:08.644 12:21:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:08.644 12:21:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3548925' 00:25:08.644 killing process with pid 3548925 00:25:08.644 12:21:09 -- common/autotest_common.sh@955 -- # kill 3548925 00:25:08.644 Received shutdown signal, test time was about 2.000000 seconds 00:25:08.644 00:25:08.644 Latency(us) 00:25:08.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.645 =================================================================================================================== 00:25:08.645 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:08.645 12:21:09 -- common/autotest_common.sh@960 -- # wait 3548925 00:25:08.645 12:21:09 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:08.645 12:21:09 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:08.645 12:21:09 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:08.645 12:21:09 -- host/digest.sh@80 -- # rw=randread 00:25:08.645 12:21:09 -- host/digest.sh@80 -- # bs=131072 00:25:08.645 12:21:09 -- host/digest.sh@80 -- # qd=16 00:25:08.645 12:21:09 -- host/digest.sh@80 -- # scan_dsa=false 00:25:08.645 12:21:09 -- host/digest.sh@83 -- # bperfpid=3550184 00:25:08.645 12:21:09 -- host/digest.sh@84 -- # waitforlisten 3550184 /var/tmp/bperf.sock 00:25:08.645 12:21:09 -- common/autotest_common.sh@817 -- # '[' -z 3550184 ']' 00:25:08.645 12:21:09 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:08.645 12:21:09 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:08.645 12:21:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:08.645 12:21:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:08.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:08.645 12:21:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:08.645 12:21:09 -- common/autotest_common.sh@10 -- # set +x 00:25:08.645 [2024-04-26 12:21:09.839271] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:08.645 [2024-04-26 12:21:09.839330] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3550184 ] 00:25:08.645 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:08.645 Zero copy mechanism will not be used. 00:25:08.905 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.905 [2024-04-26 12:21:09.916734] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.905 [2024-04-26 12:21:09.978401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.475 12:21:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:09.475 12:21:10 -- common/autotest_common.sh@850 -- # return 0 00:25:09.475 12:21:10 -- host/digest.sh@86 -- # false 00:25:09.475 12:21:10 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:09.475 12:21:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:09.735 12:21:10 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.735 12:21:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.996 nvme0n1 00:25:09.996 12:21:11 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:09.996 12:21:11 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:10.257 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:10.257 Zero copy mechanism will not be used. 00:25:10.257 Running I/O for 2 seconds... 
00:25:12.169 00:25:12.169 Latency(us) 00:25:12.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.169 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:12.169 nvme0n1 : 2.00 3521.71 440.21 0.00 0.00 4540.37 983.04 10704.21 00:25:12.169 =================================================================================================================== 00:25:12.169 Total : 3521.71 440.21 0.00 0.00 4540.37 983.04 10704.21 00:25:12.169 0 00:25:12.169 12:21:13 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:12.169 12:21:13 -- host/digest.sh@93 -- # get_accel_stats 00:25:12.169 12:21:13 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:12.169 12:21:13 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:12.169 | select(.opcode=="crc32c") 00:25:12.169 | "\(.module_name) \(.executed)"' 00:25:12.169 12:21:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:12.429 12:21:13 -- host/digest.sh@94 -- # false 00:25:12.429 12:21:13 -- host/digest.sh@94 -- # exp_module=software 00:25:12.429 12:21:13 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:12.429 12:21:13 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:12.429 12:21:13 -- host/digest.sh@98 -- # killprocess 3550184 00:25:12.429 12:21:13 -- common/autotest_common.sh@936 -- # '[' -z 3550184 ']' 00:25:12.429 12:21:13 -- common/autotest_common.sh@940 -- # kill -0 3550184 00:25:12.429 12:21:13 -- common/autotest_common.sh@941 -- # uname 00:25:12.429 12:21:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:12.429 12:21:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3550184 00:25:12.430 12:21:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:12.430 12:21:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:12.430 12:21:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3550184' 00:25:12.430 killing process with pid 3550184 00:25:12.430 12:21:13 -- common/autotest_common.sh@955 -- # kill 3550184 00:25:12.430 Received shutdown signal, test time was about 2.000000 seconds 00:25:12.430 00:25:12.430 Latency(us) 00:25:12.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.430 =================================================================================================================== 00:25:12.430 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:12.430 12:21:13 -- common/autotest_common.sh@960 -- # wait 3550184 00:25:12.430 12:21:13 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:12.430 12:21:13 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:12.430 12:21:13 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:12.430 12:21:13 -- host/digest.sh@80 -- # rw=randwrite 00:25:12.430 12:21:13 -- host/digest.sh@80 -- # bs=4096 00:25:12.430 12:21:13 -- host/digest.sh@80 -- # qd=128 00:25:12.430 12:21:13 -- host/digest.sh@80 -- # scan_dsa=false 00:25:12.430 12:21:13 -- host/digest.sh@83 -- # bperfpid=3550899 00:25:12.430 12:21:13 -- host/digest.sh@84 -- # waitforlisten 3550899 /var/tmp/bperf.sock 00:25:12.430 12:21:13 -- common/autotest_common.sh@817 -- # '[' -z 3550899 ']' 00:25:12.430 12:21:13 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:12.430 12:21:13 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:12.430 12:21:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:12.430 12:21:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:12.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:12.430 12:21:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:12.430 12:21:13 -- common/autotest_common.sh@10 -- # set +x 00:25:12.690 [2024-04-26 12:21:13.688406] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:12.690 [2024-04-26 12:21:13.688464] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3550899 ] 00:25:12.690 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.690 [2024-04-26 12:21:13.762783] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.690 [2024-04-26 12:21:13.814367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.259 12:21:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:13.259 12:21:14 -- common/autotest_common.sh@850 -- # return 0 00:25:13.259 12:21:14 -- host/digest.sh@86 -- # false 00:25:13.259 12:21:14 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:13.259 12:21:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:13.521 12:21:14 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.521 12:21:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.781 nvme0n1 00:25:13.781 12:21:14 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:13.781 12:21:14 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:14.041 Running I/O for 2 seconds... 
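Note on the sequence just traced: run_bperf brings bdevperf up idle (-z --wait-for-rpc) and then configures and drives it entirely over the /var/tmp/bperf.sock RPC socket. A minimal standalone sketch of that flow, using only commands visible in this run (the SPDK path, the 10.0.0.2:4420 target and the cnode1 NQN are specific to this CI job; SPDK_DIR and SOCK are just shorthand for the sketch):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  # 1. Start bdevperf idle; it waits for RPC configuration before doing any I/O.
  $SPDK_DIR/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # 2. Once the socket is up, finish framework init and attach the NVMe/TCP
  #    controller with data digest enabled (--ddgst), exposing bdev nvme0n1.
  $SPDK_DIR/scripts/rpc.py -s $SOCK framework_start_init
  $SPDK_DIR/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 3. Kick off the timed I/O run (the "Running I/O for 2 seconds..." above).
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests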
00:25:15.954 00:25:15.954 Latency(us) 00:25:15.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.954 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:15.954 nvme0n1 : 2.00 21236.06 82.95 0.00 0.00 6023.19 2211.84 14417.92 00:25:15.954 =================================================================================================================== 00:25:15.954 Total : 21236.06 82.95 0.00 0.00 6023.19 2211.84 14417.92 00:25:15.954 0 00:25:15.954 12:21:17 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:15.954 12:21:17 -- host/digest.sh@93 -- # get_accel_stats 00:25:15.954 12:21:17 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:15.954 12:21:17 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:15.954 | select(.opcode=="crc32c") 00:25:15.954 | "\(.module_name) \(.executed)"' 00:25:15.954 12:21:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:16.215 12:21:17 -- host/digest.sh@94 -- # false 00:25:16.215 12:21:17 -- host/digest.sh@94 -- # exp_module=software 00:25:16.215 12:21:17 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:16.215 12:21:17 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:16.215 12:21:17 -- host/digest.sh@98 -- # killprocess 3550899 00:25:16.215 12:21:17 -- common/autotest_common.sh@936 -- # '[' -z 3550899 ']' 00:25:16.215 12:21:17 -- common/autotest_common.sh@940 -- # kill -0 3550899 00:25:16.215 12:21:17 -- common/autotest_common.sh@941 -- # uname 00:25:16.215 12:21:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:16.215 12:21:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3550899 00:25:16.215 12:21:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:16.215 12:21:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:16.215 12:21:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3550899' 00:25:16.215 killing process with pid 3550899 00:25:16.215 12:21:17 -- common/autotest_common.sh@955 -- # kill 3550899 00:25:16.215 Received shutdown signal, test time was about 2.000000 seconds 00:25:16.215 00:25:16.215 Latency(us) 00:25:16.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.215 =================================================================================================================== 00:25:16.215 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.215 12:21:17 -- common/autotest_common.sh@960 -- # wait 3550899 00:25:16.215 12:21:17 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:16.215 12:21:17 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:16.215 12:21:17 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:16.215 12:21:17 -- host/digest.sh@80 -- # rw=randwrite 00:25:16.215 12:21:17 -- host/digest.sh@80 -- # bs=131072 00:25:16.215 12:21:17 -- host/digest.sh@80 -- # qd=16 00:25:16.215 12:21:17 -- host/digest.sh@80 -- # scan_dsa=false 00:25:16.476 12:21:17 -- host/digest.sh@83 -- # bperfpid=3551578 00:25:16.476 12:21:17 -- host/digest.sh@84 -- # waitforlisten 3551578 /var/tmp/bperf.sock 00:25:16.476 12:21:17 -- common/autotest_common.sh@817 -- # '[' -z 3551578 ']' 00:25:16.476 12:21:17 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:16.476 
12:21:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:16.476 12:21:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:16.476 12:21:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:16.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:16.476 12:21:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:16.476 12:21:17 -- common/autotest_common.sh@10 -- # set +x 00:25:16.476 [2024-04-26 12:21:17.488182] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:16.476 [2024-04-26 12:21:17.488234] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3551578 ] 00:25:16.476 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:16.476 Zero copy mechanism will not be used. 00:25:16.476 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.476 [2024-04-26 12:21:17.561949] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.476 [2024-04-26 12:21:17.612264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.047 12:21:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:17.047 12:21:18 -- common/autotest_common.sh@850 -- # return 0 00:25:17.047 12:21:18 -- host/digest.sh@86 -- # false 00:25:17.047 12:21:18 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:17.047 12:21:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:17.309 12:21:18 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:17.309 12:21:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:17.569 nvme0n1 00:25:17.569 12:21:18 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:17.569 12:21:18 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:17.569 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:17.569 Zero copy mechanism will not be used. 00:25:17.569 Running I/O for 2 seconds... 
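After the results table that follows, digest.sh verifies that the digests were actually computed: it pulls crc32c statistics from bperf's accel framework and checks both the module name and the executed count. A standalone rendering of that check, using the same RPC and jq filter seen in the trace (SPDK_DIR and SOCK are shorthand; with scan_dsa=false the expected module is "software"):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  # Read "<module_name> <executed>" for the crc32c opcode from bperf's accel stats.
  read -r acc_module acc_executed < <(
    $SPDK_DIR/scripts/rpc.py -s $SOCK accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )

  # The check passes only if at least one crc32c operation was executed and it
  # ran in the expected module (software here, since DSA scanning is disabled).
  (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "digest path verified"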
00:25:20.110 00:25:20.110 Latency(us) 00:25:20.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.110 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:20.110 nvme0n1 : 2.00 5281.88 660.23 0.00 0.00 3023.22 1529.17 15510.19 00:25:20.110 =================================================================================================================== 00:25:20.110 Total : 5281.88 660.23 0.00 0.00 3023.22 1529.17 15510.19 00:25:20.110 0 00:25:20.110 12:21:20 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:20.110 12:21:20 -- host/digest.sh@93 -- # get_accel_stats 00:25:20.110 12:21:20 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:20.110 12:21:20 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:20.110 | select(.opcode=="crc32c") 00:25:20.110 | "\(.module_name) \(.executed)"' 00:25:20.111 12:21:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:20.111 12:21:20 -- host/digest.sh@94 -- # false 00:25:20.111 12:21:20 -- host/digest.sh@94 -- # exp_module=software 00:25:20.111 12:21:20 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:20.111 12:21:20 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:20.111 12:21:20 -- host/digest.sh@98 -- # killprocess 3551578 00:25:20.111 12:21:20 -- common/autotest_common.sh@936 -- # '[' -z 3551578 ']' 00:25:20.111 12:21:20 -- common/autotest_common.sh@940 -- # kill -0 3551578 00:25:20.111 12:21:20 -- common/autotest_common.sh@941 -- # uname 00:25:20.111 12:21:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:20.111 12:21:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3551578 00:25:20.111 12:21:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:20.111 12:21:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:20.111 12:21:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3551578' 00:25:20.111 killing process with pid 3551578 00:25:20.111 12:21:20 -- common/autotest_common.sh@955 -- # kill 3551578 00:25:20.111 Received shutdown signal, test time was about 2.000000 seconds 00:25:20.111 00:25:20.111 Latency(us) 00:25:20.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.111 =================================================================================================================== 00:25:20.111 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.111 12:21:20 -- common/autotest_common.sh@960 -- # wait 3551578 00:25:20.111 12:21:21 -- host/digest.sh@132 -- # killprocess 3548718 00:25:20.111 12:21:21 -- common/autotest_common.sh@936 -- # '[' -z 3548718 ']' 00:25:20.111 12:21:21 -- common/autotest_common.sh@940 -- # kill -0 3548718 00:25:20.111 12:21:21 -- common/autotest_common.sh@941 -- # uname 00:25:20.111 12:21:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:20.111 12:21:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3548718 00:25:20.111 12:21:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:20.111 12:21:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:20.111 12:21:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3548718' 00:25:20.111 killing process with pid 3548718 00:25:20.111 12:21:21 -- common/autotest_common.sh@955 -- # kill 3548718 00:25:20.111 12:21:21 -- common/autotest_common.sh@960 -- # wait 3548718 00:25:20.111 
00:25:20.111 real 0m16.232s 00:25:20.111 user 0m31.875s 00:25:20.111 sys 0m3.331s 00:25:20.111 12:21:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:20.111 12:21:21 -- common/autotest_common.sh@10 -- # set +x 00:25:20.111 ************************************ 00:25:20.111 END TEST nvmf_digest_clean 00:25:20.111 ************************************ 00:25:20.111 12:21:21 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:20.111 12:21:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:20.111 12:21:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:20.111 12:21:21 -- common/autotest_common.sh@10 -- # set +x 00:25:20.371 ************************************ 00:25:20.371 START TEST nvmf_digest_error 00:25:20.371 ************************************ 00:25:20.371 12:21:21 -- common/autotest_common.sh@1111 -- # run_digest_error 00:25:20.371 12:21:21 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:20.371 12:21:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:20.371 12:21:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:20.371 12:21:21 -- common/autotest_common.sh@10 -- # set +x 00:25:20.371 12:21:21 -- nvmf/common.sh@470 -- # nvmfpid=3552372 00:25:20.371 12:21:21 -- nvmf/common.sh@471 -- # waitforlisten 3552372 00:25:20.371 12:21:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:20.371 12:21:21 -- common/autotest_common.sh@817 -- # '[' -z 3552372 ']' 00:25:20.371 12:21:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.371 12:21:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:20.371 12:21:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.371 12:21:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:20.371 12:21:21 -- common/autotest_common.sh@10 -- # set +x 00:25:20.371 [2024-04-26 12:21:21.526220] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:20.371 [2024-04-26 12:21:21.526267] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.371 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.633 [2024-04-26 12:21:21.593663] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.633 [2024-04-26 12:21:21.661246] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.633 [2024-04-26 12:21:21.661280] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.633 [2024-04-26 12:21:21.661288] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.633 [2024-04-26 12:21:21.661294] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.633 [2024-04-26 12:21:21.661300] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:20.633 [2024-04-26 12:21:21.661325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.204 12:21:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:21.204 12:21:22 -- common/autotest_common.sh@850 -- # return 0 00:25:21.204 12:21:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:21.204 12:21:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:21.204 12:21:22 -- common/autotest_common.sh@10 -- # set +x 00:25:21.204 12:21:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.204 12:21:22 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:21.204 12:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.204 12:21:22 -- common/autotest_common.sh@10 -- # set +x 00:25:21.204 [2024-04-26 12:21:22.331258] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:21.204 12:21:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.204 12:21:22 -- host/digest.sh@105 -- # common_target_config 00:25:21.204 12:21:22 -- host/digest.sh@43 -- # rpc_cmd 00:25:21.204 12:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.204 12:21:22 -- common/autotest_common.sh@10 -- # set +x 00:25:21.204 null0 00:25:21.204 [2024-04-26 12:21:22.411738] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.464 [2024-04-26 12:21:22.435933] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.465 12:21:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.465 12:21:22 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:21.465 12:21:22 -- host/digest.sh@54 -- # local rw bs qd 00:25:21.465 12:21:22 -- host/digest.sh@56 -- # rw=randread 00:25:21.465 12:21:22 -- host/digest.sh@56 -- # bs=4096 00:25:21.465 12:21:22 -- host/digest.sh@56 -- # qd=128 00:25:21.465 12:21:22 -- host/digest.sh@58 -- # bperfpid=3552643 00:25:21.465 12:21:22 -- host/digest.sh@60 -- # waitforlisten 3552643 /var/tmp/bperf.sock 00:25:21.465 12:21:22 -- common/autotest_common.sh@817 -- # '[' -z 3552643 ']' 00:25:21.465 12:21:22 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:21.465 12:21:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:21.465 12:21:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:21.465 12:21:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:21.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:21.465 12:21:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:21.465 12:21:22 -- common/autotest_common.sh@10 -- # set +x 00:25:21.465 [2024-04-26 12:21:22.496948] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:25:21.465 [2024-04-26 12:21:22.496997] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3552643 ] 00:25:21.465 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.465 [2024-04-26 12:21:22.570974] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.465 [2024-04-26 12:21:22.622897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.035 12:21:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:22.035 12:21:23 -- common/autotest_common.sh@850 -- # return 0 00:25:22.035 12:21:23 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:22.035 12:21:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:22.294 12:21:23 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:22.294 12:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.294 12:21:23 -- common/autotest_common.sh@10 -- # set +x 00:25:22.294 12:21:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.294 12:21:23 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:22.294 12:21:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:22.554 nvme0n1 00:25:22.554 12:21:23 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:22.554 12:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.554 12:21:23 -- common/autotest_common.sh@10 -- # set +x 00:25:22.554 12:21:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.554 12:21:23 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:22.554 12:21:23 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:22.815 Running I/O for 2 seconds... 
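The nvmf_digest_error run set up above reuses the same bperf bring-up but adds CRC-32C error injection: on the target, crc32c (the operation behind the TCP data digest) was assigned to the accel "error" module at startup, and once the controller is attached it is told to corrupt crc32c results (accel_error_inject_error -o crc32c -t corrupt -i 256), so the initiator inside bdevperf starts flagging digest mismatches on its READ completions, which is what the nvme_tcp "data digest error" notices that follow show. A hypothetical standalone rendering of that wiring (rpc.py with no -s talks to the nvmf target's default socket, -s /var/tmp/bperf.sock talks to bdevperf; SPDK_DIR and BPERF are shorthand; in this job the target itself was launched under ip netns exec cvl_0_0_ns_spdk):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF=/var/tmp/bperf.sock

  # Target: route crc32c through the accel error module (done at target startup).
  $SPDK_DIR/scripts/rpc.py accel_assign_opc -o crc32c -m error

  # Initiator: enable NVMe error statistics and set the bdev retry count as the
  # test does, keep injection disabled while attaching with --ddgst.
  $SPDK_DIR/scripts/rpc.py -s $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  $SPDK_DIR/scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target: start corrupting crc32c results (-t corrupt -i 256 as in this run),
  # then drive I/O; the initiator reports the resulting data digest errors.
  $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests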
00:25:22.815 [2024-04-26 12:21:23.822145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.815 [2024-04-26 12:21:23.822175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.815 [2024-04-26 12:21:23.822184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.815 [2024-04-26 12:21:23.837472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.815 [2024-04-26 12:21:23.837493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.815 [2024-04-26 12:21:23.837500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.815 [2024-04-26 12:21:23.850860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.815 [2024-04-26 12:21:23.850879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.815 [2024-04-26 12:21:23.850886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.815 [2024-04-26 12:21:23.864046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.815 [2024-04-26 12:21:23.864070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.815 [2024-04-26 12:21:23.864076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.815 [2024-04-26 12:21:23.875762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.815 [2024-04-26 12:21:23.875781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.815 [2024-04-26 12:21:23.875788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.815 [2024-04-26 12:21:23.889452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.815 [2024-04-26 12:21:23.889470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.815 [2024-04-26 12:21:23.889477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.815 [2024-04-26 12:21:23.902209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.815 [2024-04-26 12:21:23.902226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.815 [2024-04-26 12:21:23.902232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.815 [2024-04-26 12:21:23.914101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.815 [2024-04-26 12:21:23.914119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.815 [2024-04-26 12:21:23.914125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.815 [2024-04-26 12:21:23.927152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.815 [2024-04-26 12:21:23.927170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.815 [2024-04-26 12:21:23.927176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.815 [2024-04-26 12:21:23.941657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.815 [2024-04-26 12:21:23.941676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.815 [2024-04-26 12:21:23.941683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.815 [2024-04-26 12:21:23.953192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.815 [2024-04-26 12:21:23.953211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.815 [2024-04-26 12:21:23.953217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.815 [2024-04-26 12:21:23.966348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.815 [2024-04-26 12:21:23.966366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.815 [2024-04-26 12:21:23.966372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.816 [2024-04-26 12:21:23.980226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.816 [2024-04-26 12:21:23.980244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.816 [2024-04-26 12:21:23.980250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.816 [2024-04-26 12:21:23.992200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.816 [2024-04-26 12:21:23.992217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.816 [2024-04-26 12:21:23.992223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.816 [2024-04-26 12:21:24.003550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.816 [2024-04-26 12:21:24.003567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.816 [2024-04-26 12:21:24.003573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.816 [2024-04-26 12:21:24.015995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.816 [2024-04-26 12:21:24.016013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.816 [2024-04-26 12:21:24.016019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.816 [2024-04-26 12:21:24.029249] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:22.816 [2024-04-26 12:21:24.029266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.816 [2024-04-26 12:21:24.029272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.077 [2024-04-26 12:21:24.042433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.077 [2024-04-26 12:21:24.042451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.077 [2024-04-26 12:21:24.042458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.077 [2024-04-26 12:21:24.054300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.077 [2024-04-26 12:21:24.054318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.077 [2024-04-26 12:21:24.054324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.077 [2024-04-26 12:21:24.067125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.077 [2024-04-26 12:21:24.067143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.077 [2024-04-26 12:21:24.067149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.077 [2024-04-26 12:21:24.079878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.077 [2024-04-26 12:21:24.079895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.077 [2024-04-26 12:21:24.079905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.077 [2024-04-26 12:21:24.092931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.077 [2024-04-26 12:21:24.092949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.077 [2024-04-26 12:21:24.092956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.077 [2024-04-26 12:21:24.103098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.077 [2024-04-26 12:21:24.103116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.077 [2024-04-26 12:21:24.103122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.077 [2024-04-26 12:21:24.116139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.077 [2024-04-26 12:21:24.116158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.077 [2024-04-26 12:21:24.116164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.077 [2024-04-26 12:21:24.130232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.077 [2024-04-26 12:21:24.130249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.077 [2024-04-26 12:21:24.130255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.077 [2024-04-26 12:21:24.143258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.077 [2024-04-26 12:21:24.143276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.077 [2024-04-26 12:21:24.143282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.077 [2024-04-26 12:21:24.157721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.077 [2024-04-26 12:21:24.157739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.077 [2024-04-26 12:21:24.157747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.077 [2024-04-26 12:21:24.169334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.077 [2024-04-26 12:21:24.169352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:23.077 [2024-04-26 12:21:24.169360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.077 [2024-04-26 12:21:24.182497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.077 [2024-04-26 12:21:24.182514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.077 [2024-04-26 12:21:24.182521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.077 [2024-04-26 12:21:24.198003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.077 [2024-04-26 12:21:24.198020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.077 [2024-04-26 12:21:24.198027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.077 [2024-04-26 12:21:24.208413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.077 [2024-04-26 12:21:24.208430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.078 [2024-04-26 12:21:24.208437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.078 [2024-04-26 12:21:24.222298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.078 [2024-04-26 12:21:24.222314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.078 [2024-04-26 12:21:24.222321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.078 [2024-04-26 12:21:24.233157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.078 [2024-04-26 12:21:24.233175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.078 [2024-04-26 12:21:24.233181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.078 [2024-04-26 12:21:24.247440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.078 [2024-04-26 12:21:24.247457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.078 [2024-04-26 12:21:24.247463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.078 [2024-04-26 12:21:24.260109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.078 [2024-04-26 12:21:24.260127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11967 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.078 [2024-04-26 12:21:24.260134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.078 [2024-04-26 12:21:24.273243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.078 [2024-04-26 12:21:24.273260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.078 [2024-04-26 12:21:24.273267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.078 [2024-04-26 12:21:24.285022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.078 [2024-04-26 12:21:24.285040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.078 [2024-04-26 12:21:24.285046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.078 [2024-04-26 12:21:24.296518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.078 [2024-04-26 12:21:24.296536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.078 [2024-04-26 12:21:24.296545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.309798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.309815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.309821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.322117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.322134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.322140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.334954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.334972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.334979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.346273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.346291] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.346297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.360586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.360603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.360609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.373374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.373391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.373398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.385279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.385297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.385304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.398778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.398796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.398802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.410693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.410713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.410720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.423579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.423596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.423603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.436600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.436617] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.436624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.449042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.449060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.449066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.461568] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.461585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.461592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.472981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.472998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.473005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.486432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.486450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.486456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.498055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.498073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.339 [2024-04-26 12:21:24.498080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.339 [2024-04-26 12:21:24.512609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.339 [2024-04-26 12:21:24.512626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.340 [2024-04-26 12:21:24.512633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.340 [2024-04-26 12:21:24.523390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf424c0) 00:25:23.340 [2024-04-26 12:21:24.523408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.340 [2024-04-26 12:21:24.523414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.340 [2024-04-26 12:21:24.536672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.340 [2024-04-26 12:21:24.536689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.340 [2024-04-26 12:21:24.536695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.340 [2024-04-26 12:21:24.550519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.340 [2024-04-26 12:21:24.550537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.340 [2024-04-26 12:21:24.550544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.600 [2024-04-26 12:21:24.563781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.563798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.563804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.573447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.573464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.573471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.587212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.587230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.587236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.600970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.600988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.600994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.614019] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.614035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.614042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.624385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.624402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.624412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.639086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.639103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.639111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.650465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.650482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.650488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.663022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.663038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.663045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.676169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.676186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.676192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.688124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.688142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.688148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:23.601 [2024-04-26 12:21:24.701709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.701725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.701732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.714381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.714399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.714406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.725243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.725260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.725269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.738593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.738613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.738620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.751556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.751574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.751580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.764432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.764450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.764458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.777422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.777440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.777447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.789911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.789928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.789934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.800714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.800732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.800738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.601 [2024-04-26 12:21:24.814074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.601 [2024-04-26 12:21:24.814092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.601 [2024-04-26 12:21:24.814098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.862 [2024-04-26 12:21:24.826099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.862 [2024-04-26 12:21:24.826117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.862 [2024-04-26 12:21:24.826123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.862 [2024-04-26 12:21:24.840361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.862 [2024-04-26 12:21:24.840378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.862 [2024-04-26 12:21:24.840384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.862 [2024-04-26 12:21:24.851663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.862 [2024-04-26 12:21:24.851680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.862 [2024-04-26 12:21:24.851687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.862 [2024-04-26 12:21:24.865299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.862 [2024-04-26 12:21:24.865317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.862 [2024-04-26 12:21:24.865323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.862 [2024-04-26 12:21:24.878012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.862 [2024-04-26 12:21:24.878028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.862 [2024-04-26 12:21:24.878035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.862 [2024-04-26 12:21:24.891194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.862 [2024-04-26 12:21:24.891212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.862 [2024-04-26 12:21:24.891219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.862 [2024-04-26 12:21:24.904294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.863 [2024-04-26 12:21:24.904312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.863 [2024-04-26 12:21:24.904318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.863 [2024-04-26 12:21:24.917083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.863 [2024-04-26 12:21:24.917101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.863 [2024-04-26 12:21:24.917107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.863 [2024-04-26 12:21:24.929111] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.863 [2024-04-26 12:21:24.929128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.863 [2024-04-26 12:21:24.929134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.863 [2024-04-26 12:21:24.941492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.863 [2024-04-26 12:21:24.941509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.863 [2024-04-26 12:21:24.941515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.863 [2024-04-26 12:21:24.953989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.863 [2024-04-26 12:21:24.954006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.863 [2024-04-26 12:21:24.954016] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.863 [2024-04-26 12:21:24.966182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.863 [2024-04-26 12:21:24.966200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.863 [2024-04-26 12:21:24.966206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.863 [2024-04-26 12:21:24.979470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.863 [2024-04-26 12:21:24.979487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.863 [2024-04-26 12:21:24.979493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.863 [2024-04-26 12:21:24.991388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.863 [2024-04-26 12:21:24.991405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.863 [2024-04-26 12:21:24.991412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.863 [2024-04-26 12:21:25.005191] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.863 [2024-04-26 12:21:25.005208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.863 [2024-04-26 12:21:25.005214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.863 [2024-04-26 12:21:25.016436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.863 [2024-04-26 12:21:25.016454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.863 [2024-04-26 12:21:25.016460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.863 [2024-04-26 12:21:25.028405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.863 [2024-04-26 12:21:25.028422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.863 [2024-04-26 12:21:25.028429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.863 [2024-04-26 12:21:25.042098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.863 [2024-04-26 12:21:25.042115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21239 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:23.863 [2024-04-26 12:21:25.042122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.863 [2024-04-26 12:21:25.055660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.863 [2024-04-26 12:21:25.055677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.863 [2024-04-26 12:21:25.055684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.863 [2024-04-26 12:21:25.069007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:23.863 [2024-04-26 12:21:25.069025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.863 [2024-04-26 12:21:25.069031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.082633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.082651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.082657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.094805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.094823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.094830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.106208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.106226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.106234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.119095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.119112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.119118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.131445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.131463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:17563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.131470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.145026] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.145043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.145050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.158273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.158290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.158297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.170244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.170262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.170271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.183066] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.183083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.183089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.195399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.195417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.195424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.206774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.206792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.206799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.221642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.221660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.221666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.233976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.233994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.234000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.246291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.246309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.246315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.258603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.258620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.258627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.271811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.271828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.271835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.283194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.283215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.283221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.295402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.295419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.295426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.308148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 
[2024-04-26 12:21:25.308166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.308173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.124 [2024-04-26 12:21:25.322132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.124 [2024-04-26 12:21:25.322149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.124 [2024-04-26 12:21:25.322156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.125 [2024-04-26 12:21:25.335239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.125 [2024-04-26 12:21:25.335256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.125 [2024-04-26 12:21:25.335262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.387 [2024-04-26 12:21:25.348277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.387 [2024-04-26 12:21:25.348295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.387 [2024-04-26 12:21:25.348301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.387 [2024-04-26 12:21:25.359392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.387 [2024-04-26 12:21:25.359410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.387 [2024-04-26 12:21:25.359416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.387 [2024-04-26 12:21:25.371376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.387 [2024-04-26 12:21:25.371393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.387 [2024-04-26 12:21:25.371399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.387 [2024-04-26 12:21:25.384798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.387 [2024-04-26 12:21:25.384816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.387 [2024-04-26 12:21:25.384823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.387 [2024-04-26 12:21:25.397435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xf424c0) 00:25:24.387 [2024-04-26 12:21:25.397452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.387 [2024-04-26 12:21:25.397459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.387 [2024-04-26 12:21:25.410977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.387 [2024-04-26 12:21:25.410995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.387 [2024-04-26 12:21:25.411001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.387 [2024-04-26 12:21:25.423224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.387 [2024-04-26 12:21:25.423241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.387 [2024-04-26 12:21:25.423248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.387 [2024-04-26 12:21:25.435351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.387 [2024-04-26 12:21:25.435368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.387 [2024-04-26 12:21:25.435375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.387 [2024-04-26 12:21:25.447308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.387 [2024-04-26 12:21:25.447325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.387 [2024-04-26 12:21:25.447331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.387 [2024-04-26 12:21:25.460278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.387 [2024-04-26 12:21:25.460296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.387 [2024-04-26 12:21:25.460302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.387 [2024-04-26 12:21:25.473203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.388 [2024-04-26 12:21:25.473220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.388 [2024-04-26 12:21:25.473226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.388 [2024-04-26 12:21:25.486473] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.388 [2024-04-26 12:21:25.486490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.388 [2024-04-26 12:21:25.486497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.388 [2024-04-26 12:21:25.498742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.388 [2024-04-26 12:21:25.498759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.388 [2024-04-26 12:21:25.498768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.388 [2024-04-26 12:21:25.512352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.388 [2024-04-26 12:21:25.512370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.388 [2024-04-26 12:21:25.512377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.388 [2024-04-26 12:21:25.524768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.388 [2024-04-26 12:21:25.524785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.388 [2024-04-26 12:21:25.524791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.388 [2024-04-26 12:21:25.536760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.388 [2024-04-26 12:21:25.536778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.388 [2024-04-26 12:21:25.536784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.388 [2024-04-26 12:21:25.547331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.388 [2024-04-26 12:21:25.547348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.388 [2024-04-26 12:21:25.547354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.388 [2024-04-26 12:21:25.562439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.388 [2024-04-26 12:21:25.562456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.388 [2024-04-26 12:21:25.562462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:24.388 [2024-04-26 12:21:25.573955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.388 [2024-04-26 12:21:25.573972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.388 [2024-04-26 12:21:25.573978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.388 [2024-04-26 12:21:25.587306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.388 [2024-04-26 12:21:25.587323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.388 [2024-04-26 12:21:25.587331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.388 [2024-04-26 12:21:25.599156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.388 [2024-04-26 12:21:25.599173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.388 [2024-04-26 12:21:25.599179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.649 [2024-04-26 12:21:25.611650] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.649 [2024-04-26 12:21:25.611671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.649 [2024-04-26 12:21:25.611678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.649 [2024-04-26 12:21:25.624981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.649 [2024-04-26 12:21:25.624998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.649 [2024-04-26 12:21:25.625005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.649 [2024-04-26 12:21:25.637428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.649 [2024-04-26 12:21:25.637445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.637451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 [2024-04-26 12:21:25.651277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.650 [2024-04-26 12:21:25.651294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.651301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 [2024-04-26 12:21:25.664350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.650 [2024-04-26 12:21:25.664368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.664374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 [2024-04-26 12:21:25.677575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.650 [2024-04-26 12:21:25.677592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.677599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 [2024-04-26 12:21:25.689794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.650 [2024-04-26 12:21:25.689812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.689819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 [2024-04-26 12:21:25.699842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.650 [2024-04-26 12:21:25.699859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.699865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 [2024-04-26 12:21:25.713627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.650 [2024-04-26 12:21:25.713645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.713651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 [2024-04-26 12:21:25.725941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.650 [2024-04-26 12:21:25.725958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.725965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 [2024-04-26 12:21:25.739148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.650 [2024-04-26 12:21:25.739166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.739172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 [2024-04-26 12:21:25.753114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.650 [2024-04-26 12:21:25.753131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.753137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 [2024-04-26 12:21:25.764247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.650 [2024-04-26 12:21:25.764264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.764270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 [2024-04-26 12:21:25.776683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.650 [2024-04-26 12:21:25.776700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.776706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 [2024-04-26 12:21:25.790139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.650 [2024-04-26 12:21:25.790156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.790162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 [2024-04-26 12:21:25.803797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.650 [2024-04-26 12:21:25.803815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.803821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 [2024-04-26 12:21:25.812548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf424c0) 00:25:24.650 [2024-04-26 12:21:25.812565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.650 [2024-04-26 12:21:25.812571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.650 00:25:24.650 Latency(us) 00:25:24.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.650 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:24.650 nvme0n1 : 2.04 19712.46 77.00 0.00 0.00 6356.59 2266.45 45001.39 00:25:24.650 
=================================================================================================================== 00:25:24.650 Total : 19712.46 77.00 0.00 0.00 6356.59 2266.45 45001.39 00:25:24.650 0 00:25:24.911 12:21:25 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:24.911 12:21:25 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:24.911 12:21:25 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:24.911 | .driver_specific 00:25:24.911 | .nvme_error 00:25:24.911 | .status_code 00:25:24.911 | .command_transient_transport_error' 00:25:24.911 12:21:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:24.911 12:21:26 -- host/digest.sh@71 -- # (( 158 > 0 )) 00:25:24.911 12:21:26 -- host/digest.sh@73 -- # killprocess 3552643 00:25:24.911 12:21:26 -- common/autotest_common.sh@936 -- # '[' -z 3552643 ']' 00:25:24.911 12:21:26 -- common/autotest_common.sh@940 -- # kill -0 3552643 00:25:24.911 12:21:26 -- common/autotest_common.sh@941 -- # uname 00:25:24.911 12:21:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:24.911 12:21:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3552643 00:25:24.911 12:21:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:24.911 12:21:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:24.911 12:21:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3552643' 00:25:24.911 killing process with pid 3552643 00:25:24.911 12:21:26 -- common/autotest_common.sh@955 -- # kill 3552643 00:25:24.911 Received shutdown signal, test time was about 2.000000 seconds 00:25:24.911 00:25:24.911 Latency(us) 00:25:24.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.911 =================================================================================================================== 00:25:24.911 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:24.911 12:21:26 -- common/autotest_common.sh@960 -- # wait 3552643 00:25:25.171 12:21:26 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:25.171 12:21:26 -- host/digest.sh@54 -- # local rw bs qd 00:25:25.171 12:21:26 -- host/digest.sh@56 -- # rw=randread 00:25:25.171 12:21:26 -- host/digest.sh@56 -- # bs=131072 00:25:25.171 12:21:26 -- host/digest.sh@56 -- # qd=16 00:25:25.171 12:21:26 -- host/digest.sh@58 -- # bperfpid=3553331 00:25:25.171 12:21:26 -- host/digest.sh@60 -- # waitforlisten 3553331 /var/tmp/bperf.sock 00:25:25.171 12:21:26 -- common/autotest_common.sh@817 -- # '[' -z 3553331 ']' 00:25:25.171 12:21:26 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:25.171 12:21:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:25.171 12:21:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:25.171 12:21:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:25.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:25.171 12:21:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:25.171 12:21:26 -- common/autotest_common.sh@10 -- # set +x 00:25:25.171 [2024-04-26 12:21:26.262167] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:25:25.171 [2024-04-26 12:21:26.262235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3553331 ] 00:25:25.171 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:25.171 Zero copy mechanism will not be used. 00:25:25.171 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.171 [2024-04-26 12:21:26.338294] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.171 [2024-04-26 12:21:26.390226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.113 12:21:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:26.113 12:21:27 -- common/autotest_common.sh@850 -- # return 0 00:25:26.114 12:21:27 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:26.114 12:21:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:26.114 12:21:27 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:26.114 12:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.114 12:21:27 -- common/autotest_common.sh@10 -- # set +x 00:25:26.114 12:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.114 12:21:27 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:26.114 12:21:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:26.374 nvme0n1 00:25:26.374 12:21:27 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:26.374 12:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.374 12:21:27 -- common/autotest_common.sh@10 -- # set +x 00:25:26.374 12:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.374 12:21:27 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:26.374 12:21:27 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:26.635 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:26.635 Zero copy mechanism will not be used. 00:25:26.635 Running I/O for 2 seconds... 
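The trace above is the interesting part of host/digest.sh for this case: after the previous bdevperf pass it read the transient-transport-error counter back over the bperf RPC socket with bdev_get_iostat and a jq filter, confirmed it was non-zero (the (( 158 > 0 )) check), killed that bdevperf instance, and then re-armed CRC32C corruption before starting this pass (randread, 131072-byte I/Os, queue depth 16). Below is a minimal sketch of that sequence, assembled only from commands visible in the trace; the workspace path, the 10.0.0.2/4420 target address and the cnode1 NQN are specific to this run, and the accel_error_inject_error call is assumed to go to the target application's default RPC socket, since the script issues it through rpc_cmd without an explicit -s.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from this run; adjust locally

# Start bdevperf on its own RPC socket; -z makes it wait for a perform_tests RPC.
# (The harness waits for /var/tmp/bperf.sock to appear before issuing the RPCs below.)
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

# Enable per-NVMe-status error counters on the initiator side (flags copied verbatim from the trace).
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the remote namespace with data digest enabled so a corrupted CRC32C is actually checked.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Inject CRC32C errors in the accel layer (flags copied from the trace; default target RPC socket assumed).
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the 2-second workload, then read the transient-transport-error counter back.
# The jq path below is the piped filter from the trace collapsed into one expression.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Each injected digest failure surfaces as one of the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that follow, and the case passes as long as the counter printed by the last command is greater than zero.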
00:25:26.635 [2024-04-26 12:21:27.624761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.635 [2024-04-26 12:21:27.624792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.635 [2024-04-26 12:21:27.624801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.635 [2024-04-26 12:21:27.634770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.635 [2024-04-26 12:21:27.634791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.635 [2024-04-26 12:21:27.634798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.635 [2024-04-26 12:21:27.644700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.635 [2024-04-26 12:21:27.644719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.635 [2024-04-26 12:21:27.644726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.635 [2024-04-26 12:21:27.655951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.635 [2024-04-26 12:21:27.655970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.635 [2024-04-26 12:21:27.655977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.635 [2024-04-26 12:21:27.665942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.635 [2024-04-26 12:21:27.665960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.635 [2024-04-26 12:21:27.665967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.635 [2024-04-26 12:21:27.674439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.635 [2024-04-26 12:21:27.674457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.635 [2024-04-26 12:21:27.674463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.635 [2024-04-26 12:21:27.684154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.635 [2024-04-26 12:21:27.684172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.635 [2024-04-26 12:21:27.684178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.635 [2024-04-26 12:21:27.696145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.635 [2024-04-26 12:21:27.696163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.635 [2024-04-26 12:21:27.696170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.635 [2024-04-26 12:21:27.705450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.635 [2024-04-26 12:21:27.705469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.635 [2024-04-26 12:21:27.705475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.635 [2024-04-26 12:21:27.715326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.635 [2024-04-26 12:21:27.715344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.635 [2024-04-26 12:21:27.715351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.635 [2024-04-26 12:21:27.726863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.635 [2024-04-26 12:21:27.726881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.635 [2024-04-26 12:21:27.726887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.635 [2024-04-26 12:21:27.736798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.635 [2024-04-26 12:21:27.736816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.635 [2024-04-26 12:21:27.736822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.635 [2024-04-26 12:21:27.747017] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.635 [2024-04-26 12:21:27.747036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.635 [2024-04-26 12:21:27.747042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.636 [2024-04-26 12:21:27.757583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.636 [2024-04-26 12:21:27.757601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.636 [2024-04-26 12:21:27.757611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.636 [2024-04-26 12:21:27.766781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.636 [2024-04-26 12:21:27.766798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.636 [2024-04-26 12:21:27.766804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.636 [2024-04-26 12:21:27.776112] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.636 [2024-04-26 12:21:27.776130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.636 [2024-04-26 12:21:27.776137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.636 [2024-04-26 12:21:27.784893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.636 [2024-04-26 12:21:27.784912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.636 [2024-04-26 12:21:27.784918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.636 [2024-04-26 12:21:27.795497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.636 [2024-04-26 12:21:27.795515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.636 [2024-04-26 12:21:27.795521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.636 [2024-04-26 12:21:27.807180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.636 [2024-04-26 12:21:27.807198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.636 [2024-04-26 12:21:27.807204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.636 [2024-04-26 12:21:27.817977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.636 [2024-04-26 12:21:27.817995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.636 [2024-04-26 12:21:27.818001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.636 [2024-04-26 12:21:27.828551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.636 [2024-04-26 12:21:27.828569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.636 
[2024-04-26 12:21:27.828575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.636 [2024-04-26 12:21:27.838027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.636 [2024-04-26 12:21:27.838045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.636 [2024-04-26 12:21:27.838052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.636 [2024-04-26 12:21:27.847112] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.636 [2024-04-26 12:21:27.847135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.636 [2024-04-26 12:21:27.847142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.896 [2024-04-26 12:21:27.857490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.896 [2024-04-26 12:21:27.857509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.896 [2024-04-26 12:21:27.857515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.896 [2024-04-26 12:21:27.867084] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.896 [2024-04-26 12:21:27.867102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.896 [2024-04-26 12:21:27.867109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.896 [2024-04-26 12:21:27.877544] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.896 [2024-04-26 12:21:27.877562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.896 [2024-04-26 12:21:27.877568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.896 [2024-04-26 12:21:27.887512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.896 [2024-04-26 12:21:27.887530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.896 [2024-04-26 12:21:27.887537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.896 [2024-04-26 12:21:27.896246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.896 [2024-04-26 12:21:27.896264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.896 [2024-04-26 12:21:27.896270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.896 [2024-04-26 12:21:27.906938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.896 [2024-04-26 12:21:27.906956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.896 [2024-04-26 12:21:27.906962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.896 [2024-04-26 12:21:27.917656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.896 [2024-04-26 12:21:27.917675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.896 [2024-04-26 12:21:27.917681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:27.925956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:27.925976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:27.925982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:27.934831] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:27.934853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:27.934860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:27.944151] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:27.944169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:27.944175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:27.954776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:27.954795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:27.954801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:27.965620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:27.965638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:27.965644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:27.975378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:27.975396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:27.975403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:27.985624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:27.985642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:27.985648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:27.994827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:27.994849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:27.994856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:28.005462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:28.005480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:28.005487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:28.014520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:28.014539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:28.014548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:28.026544] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:28.026562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:28.026568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:28.038795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:28.038813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:28.038819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:28.048322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:28.048340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:28.048346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:28.057649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:28.057666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:28.057673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:28.068171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:28.068189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:28.068195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:28.078827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:28.078849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:28.078856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:28.089847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:28.089865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:28.089871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:28.098457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 [2024-04-26 12:21:28.098475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:28.098481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.897 [2024-04-26 12:21:28.108806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:26.897 
[2024-04-26 12:21:28.108828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.897 [2024-04-26 12:21:28.108834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.119725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.119744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.119750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.129408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.129426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.129433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.138682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.138701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.138708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.147299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.147317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.147324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.157783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.157801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.157807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.168793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.168810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.168817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.178192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.178210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.178218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.187788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.187806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.187813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.197402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.197421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.197427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.209143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.209162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.209169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.221818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.221842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.221849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.234584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.234603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.234609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.245818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.245845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.245852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.255186] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.255206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.255213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.265385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.265403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.265409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.275104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.275123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.275129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.285564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.285586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.285592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.294669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.294687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.294694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.301344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.301362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.301369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.307338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.307356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.307363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:25:27.158 [2024-04-26 12:21:28.318856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.318874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.318880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.332133] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.332152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.332158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.344442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.344461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.344467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.353903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.353922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.353928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.362154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.362172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.158 [2024-04-26 12:21:28.362179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.158 [2024-04-26 12:21:28.370637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.158 [2024-04-26 12:21:28.370655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.159 [2024-04-26 12:21:28.370661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.381083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.381102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.381110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.389903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.389921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.389928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.400196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.400215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.400221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.410416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.410435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.410441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.420415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.420433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.420440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.429212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.429230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.429236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.439292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.439310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.439316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.449340] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.449358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.449367] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.458358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.458377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.458383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.466924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.466942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.466948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.476945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.476963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.476969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.485943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.485962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.485968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.494461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.494479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.494486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.506073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.506091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.506098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.515037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.515056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.515063] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.524478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.524497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.524503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.534989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.535011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.535017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.544427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.544445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.421 [2024-04-26 12:21:28.544452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.421 [2024-04-26 12:21:28.554037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.421 [2024-04-26 12:21:28.554056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.422 [2024-04-26 12:21:28.554062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.422 [2024-04-26 12:21:28.564864] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.422 [2024-04-26 12:21:28.564882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.422 [2024-04-26 12:21:28.564889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.422 [2024-04-26 12:21:28.574498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.422 [2024-04-26 12:21:28.574517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.422 [2024-04-26 12:21:28.574523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.422 [2024-04-26 12:21:28.586000] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.422 [2024-04-26 12:21:28.586019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:27.422 [2024-04-26 12:21:28.586026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.422 [2024-04-26 12:21:28.597743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.422 [2024-04-26 12:21:28.597763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.422 [2024-04-26 12:21:28.597769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.422 [2024-04-26 12:21:28.607873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.422 [2024-04-26 12:21:28.607892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.422 [2024-04-26 12:21:28.607898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.422 [2024-04-26 12:21:28.618714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.422 [2024-04-26 12:21:28.618733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.422 [2024-04-26 12:21:28.618739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.422 [2024-04-26 12:21:28.627859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.422 [2024-04-26 12:21:28.627877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.422 [2024-04-26 12:21:28.627884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.422 [2024-04-26 12:21:28.639708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.422 [2024-04-26 12:21:28.639727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.422 [2024-04-26 12:21:28.639733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.650834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.650857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.650864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.661710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.661729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.661735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.672322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.672341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.672347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.683329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.683348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.683355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.693219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.693238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.693244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.704206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.704224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.704231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.714757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.714776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.714786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.724850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.724869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.724875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.733805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.733824] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.733831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.744474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.744493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.744499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.754388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.754407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.754413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.764809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.764828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.764834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.775555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.775573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.775579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.786981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.786999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.787006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.797972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.797991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.797997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.806992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.807014] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.807020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.817950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.817969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.817975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.826967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.826986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.826992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.684 [2024-04-26 12:21:28.837822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.684 [2024-04-26 12:21:28.837845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.684 [2024-04-26 12:21:28.837852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.685 [2024-04-26 12:21:28.849242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.685 [2024-04-26 12:21:28.849261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.685 [2024-04-26 12:21:28.849267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.685 [2024-04-26 12:21:28.858854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.685 [2024-04-26 12:21:28.858873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.685 [2024-04-26 12:21:28.858880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.685 [2024-04-26 12:21:28.869221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.685 [2024-04-26 12:21:28.869240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.685 [2024-04-26 12:21:28.869246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.685 [2024-04-26 12:21:28.878665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 
00:25:27.685 [2024-04-26 12:21:28.878684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.685 [2024-04-26 12:21:28.878690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.685 [2024-04-26 12:21:28.888722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.685 [2024-04-26 12:21:28.888740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.685 [2024-04-26 12:21:28.888747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.685 [2024-04-26 12:21:28.898275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.685 [2024-04-26 12:21:28.898294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.685 [2024-04-26 12:21:28.898300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:28.907986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:28.908005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:28.908012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:28.918037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:28.918054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:28.918061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:28.928146] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:28.928164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:28.928170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:28.937967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:28.937985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:28.937991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:28.948229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:28.948247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:28.948253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:28.958918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:28.958936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:28.958942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:28.969061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:28.969079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:28.969086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:28.981155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:28.981176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:28.981183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:28.991352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:28.991371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:28.991377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.000942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.000961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.000968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.010403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.010421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.010427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.017951] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.017969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.017976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.028684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.028701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.028708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.039093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.039111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.039118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.046128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.046146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.046152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.056259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.056278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.056286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.067501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.067519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.067526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.079875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.079894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.079901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:25:27.946 [2024-04-26 12:21:29.090023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.090042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.090048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.099207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.099225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.099231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.109991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.110010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.110016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.118357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.118375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.118382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.129399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.129417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.129424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.139206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.946 [2024-04-26 12:21:29.139225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.946 [2024-04-26 12:21:29.139231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.946 [2024-04-26 12:21:29.147984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.947 [2024-04-26 12:21:29.148002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.947 [2024-04-26 12:21:29.148014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.947 [2024-04-26 12:21:29.158391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:27.947 [2024-04-26 12:21:29.158409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.947 [2024-04-26 12:21:29.158415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.211 [2024-04-26 12:21:29.168065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.211 [2024-04-26 12:21:29.168085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.211 [2024-04-26 12:21:29.168091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.211 [2024-04-26 12:21:29.177921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.211 [2024-04-26 12:21:29.177939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.211 [2024-04-26 12:21:29.177946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.187273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.187291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.187298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.198435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.198453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.198459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.210967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.210987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.210993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.223371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.223389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.223395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.236383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.236401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.236408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.248802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.248824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.248831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.258449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.258468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.258475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.268308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.268327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.268334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.278719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.278737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.278743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.288435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.288453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.288460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.298619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.298637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.298643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.308764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.308783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.308789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.318929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.318947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.318954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.329147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.329166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.329172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.339423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.339441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.339447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.349963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.349981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.349987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.359500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.359518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.359524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.369290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.369308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 
[2024-04-26 12:21:29.369315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.380256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.380274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.380280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.389977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.389995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.390002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.400242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.400260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.400266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.409621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.409640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.409647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.212 [2024-04-26 12:21:29.419691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.212 [2024-04-26 12:21:29.419710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.212 [2024-04-26 12:21:29.419719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.504 [2024-04-26 12:21:29.427684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.504 [2024-04-26 12:21:29.427702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.504 [2024-04-26 12:21:29.427708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.504 [2024-04-26 12:21:29.438230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.504 [2024-04-26 12:21:29.438249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.504 [2024-04-26 12:21:29.438255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.504 [2024-04-26 12:21:29.447761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.504 [2024-04-26 12:21:29.447779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.504 [2024-04-26 12:21:29.447785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.504 [2024-04-26 12:21:29.458076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.504 [2024-04-26 12:21:29.458094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.504 [2024-04-26 12:21:29.458101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.504 [2024-04-26 12:21:29.469095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.504 [2024-04-26 12:21:29.469113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.504 [2024-04-26 12:21:29.469119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.504 [2024-04-26 12:21:29.479701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.504 [2024-04-26 12:21:29.479720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.504 [2024-04-26 12:21:29.479727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.504 [2024-04-26 12:21:29.489538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.504 [2024-04-26 12:21:29.489557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.504 [2024-04-26 12:21:29.489563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.504 [2024-04-26 12:21:29.499863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.504 [2024-04-26 12:21:29.499881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.504 [2024-04-26 12:21:29.499887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.504 [2024-04-26 12:21:29.510074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.504 [2024-04-26 12:21:29.510096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.504 [2024-04-26 12:21:29.510103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.504 [2024-04-26 12:21:29.518962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.504 [2024-04-26 12:21:29.518979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.505 [2024-04-26 12:21:29.518985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.505 [2024-04-26 12:21:29.528879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.505 [2024-04-26 12:21:29.528897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.505 [2024-04-26 12:21:29.528903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.505 [2024-04-26 12:21:29.538220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.505 [2024-04-26 12:21:29.538238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.505 [2024-04-26 12:21:29.538244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.505 [2024-04-26 12:21:29.546888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.505 [2024-04-26 12:21:29.546907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.505 [2024-04-26 12:21:29.546913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.505 [2024-04-26 12:21:29.556209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.505 [2024-04-26 12:21:29.556228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.505 [2024-04-26 12:21:29.556235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.505 [2024-04-26 12:21:29.564958] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.505 [2024-04-26 12:21:29.564976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.505 [2024-04-26 12:21:29.564983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.505 [2024-04-26 12:21:29.576783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.505 [2024-04-26 12:21:29.576801] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.505 [2024-04-26 12:21:29.576808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.505 [2024-04-26 12:21:29.586308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.505 [2024-04-26 12:21:29.586327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.505 [2024-04-26 12:21:29.586334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.505 [2024-04-26 12:21:29.590944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.505 [2024-04-26 12:21:29.590961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.505 [2024-04-26 12:21:29.590968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.505 [2024-04-26 12:21:29.597546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.505 [2024-04-26 12:21:29.597564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.505 [2024-04-26 12:21:29.597571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.505 [2024-04-26 12:21:29.607065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.505 [2024-04-26 12:21:29.607084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.505 [2024-04-26 12:21:29.607090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.505 [2024-04-26 12:21:29.616881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a277c0) 00:25:28.505 [2024-04-26 12:21:29.616899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.505 [2024-04-26 12:21:29.616905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.505 00:25:28.505 Latency(us) 00:25:28.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.505 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:28.505 nvme0n1 : 2.01 3077.81 384.73 0.00 0.00 5194.64 1303.89 13161.81 00:25:28.505 =================================================================================================================== 00:25:28.505 Total : 3077.81 384.73 0.00 0.00 5194.64 1303.89 13161.81 00:25:28.505 0 00:25:28.505 12:21:29 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:28.505 12:21:29 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 
00:25:28.505 12:21:29 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:28.505 | .driver_specific 00:25:28.505 | .nvme_error 00:25:28.505 | .status_code 00:25:28.505 | .command_transient_transport_error' 00:25:28.505 12:21:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:28.811 12:21:29 -- host/digest.sh@71 -- # (( 199 > 0 )) 00:25:28.811 12:21:29 -- host/digest.sh@73 -- # killprocess 3553331 00:25:28.811 12:21:29 -- common/autotest_common.sh@936 -- # '[' -z 3553331 ']' 00:25:28.811 12:21:29 -- common/autotest_common.sh@940 -- # kill -0 3553331 00:25:28.811 12:21:29 -- common/autotest_common.sh@941 -- # uname 00:25:28.811 12:21:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:28.811 12:21:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3553331 00:25:28.811 12:21:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:28.811 12:21:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:28.811 12:21:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3553331' 00:25:28.811 killing process with pid 3553331 00:25:28.811 12:21:29 -- common/autotest_common.sh@955 -- # kill 3553331 00:25:28.811 Received shutdown signal, test time was about 2.000000 seconds 00:25:28.811 00:25:28.811 Latency(us) 00:25:28.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.811 =================================================================================================================== 00:25:28.811 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:28.811 12:21:29 -- common/autotest_common.sh@960 -- # wait 3553331 00:25:28.811 12:21:29 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:28.811 12:21:29 -- host/digest.sh@54 -- # local rw bs qd 00:25:28.811 12:21:29 -- host/digest.sh@56 -- # rw=randwrite 00:25:28.811 12:21:29 -- host/digest.sh@56 -- # bs=4096 00:25:28.811 12:21:29 -- host/digest.sh@56 -- # qd=128 00:25:28.811 12:21:29 -- host/digest.sh@58 -- # bperfpid=3554022 00:25:28.811 12:21:29 -- host/digest.sh@60 -- # waitforlisten 3554022 /var/tmp/bperf.sock 00:25:28.811 12:21:29 -- common/autotest_common.sh@817 -- # '[' -z 3554022 ']' 00:25:28.811 12:21:29 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:28.811 12:21:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:28.811 12:21:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:28.811 12:21:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:28.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:28.811 12:21:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:28.811 12:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:28.811 [2024-04-26 12:21:30.020449] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:25:28.811 [2024-04-26 12:21:30.020500] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3554022 ] 00:25:29.072 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.072 [2024-04-26 12:21:30.097493] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.072 [2024-04-26 12:21:30.149424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.643 12:21:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:29.643 12:21:30 -- common/autotest_common.sh@850 -- # return 0 00:25:29.643 12:21:30 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:29.643 12:21:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:29.904 12:21:30 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:29.904 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.904 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:25:29.904 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.904 12:21:30 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:29.904 12:21:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:30.166 nvme0n1 00:25:30.166 12:21:31 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:30.166 12:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.166 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:25:30.166 12:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.166 12:21:31 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:30.166 12:21:31 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:30.166 Running I/O for 2 seconds... 
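For readability, the RPC sequence that the xtrace above walks through (second pass of host/digest.sh: randwrite, 4 KiB, queue depth 128, data-digest corruption) can be collected into one short script. This is a hedged sketch, not part of the test output: the SPDK checkout path is taken from the trace but assumed to be valid, and the bare accel_error_inject_error calls are assumed to go to the target application's default RPC socket (in the trace they appear via rpc_cmd, without -s /var/tmp/bperf.sock); every flag used below is copied from the log itself.

  # Sketch of the digest-error test flow shown in the trace above (assumptions noted inline).
  set -euo pipefail
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path, matches the trace
  BPERF_SOCK=/var/tmp/bperf.sock

  # 1. Start bdevperf on its own RPC socket; -z makes it wait for perform_tests.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

  # 2. Keep per-controller NVMe error statistics and retry failed I/O indefinitely,
  #    so transient transport errors are counted rather than failing the job.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # 3. With crc32c error injection disabled on the target (assumed default RPC socket),
  #    attach the controller with data digest enabled (--ddgst).
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 4. Corrupt every 256th crc32c operation on the target, then run the workload.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

  # 5. Pass/fail check used earlier in the log: the transient transport error
  #    counter read from bdev_get_iostat must be non-zero.
  errs=$("$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 ))

The "Data digest error on tqpair" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" entries that follow are the expected effect of step 4: each corrupted crc32c result shows up as a digest failure on the host side and is retried rather than surfaced as an I/O error.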
00:25:30.166 [2024-04-26 12:21:31.321901] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190eb760 00:25:30.166 [2024-04-26 12:21:31.323698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.166 [2024-04-26 12:21:31.323729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:30.166 [2024-04-26 12:21:31.332585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f7970 00:25:30.166 [2024-04-26 12:21:31.333728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.166 [2024-04-26 12:21:31.333746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.166 [2024-04-26 12:21:31.344811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f6890 00:25:30.166 [2024-04-26 12:21:31.345911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.166 [2024-04-26 12:21:31.345928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.166 [2024-04-26 12:21:31.356938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190eee38 00:25:30.166 [2024-04-26 12:21:31.358061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.166 [2024-04-26 12:21:31.358077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:30.166 [2024-04-26 12:21:31.369117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190edd58 00:25:30.166 [2024-04-26 12:21:31.370226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.166 [2024-04-26 12:21:31.370242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:30.166 [2024-04-26 12:21:31.381314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ecc78 00:25:30.166 [2024-04-26 12:21:31.382419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.166 [2024-04-26 12:21:31.382435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.395055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ebb98 00:25:30.429 [2024-04-26 12:21:31.396842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.396858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005a 
p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.405672] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fb048 00:25:30.429 [2024-04-26 12:21:31.406796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.406811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.417820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e84c0 00:25:30.429 [2024-04-26 12:21:31.418930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.418946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.430012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e95a0 00:25:30.429 [2024-04-26 12:21:31.431103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.431122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.442213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f7100 00:25:30.429 [2024-04-26 12:21:31.443346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.443362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.454381] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ed4e8 00:25:30.429 [2024-04-26 12:21:31.455489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.455505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.466528] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ec408 00:25:30.429 [2024-04-26 12:21:31.467635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.467651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.478667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fb048 00:25:30.429 [2024-04-26 12:21:31.479805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.479821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 
cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.490787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fdeb0 00:25:30.429 [2024-04-26 12:21:31.491905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.491920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.502933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fda78 00:25:30.429 [2024-04-26 12:21:31.504058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.504074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.516577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fc998 00:25:30.429 [2024-04-26 12:21:31.518364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.518379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.527193] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ecc78 00:25:30.429 [2024-04-26 12:21:31.528265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.528282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.539455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190edd58 00:25:30.429 [2024-04-26 12:21:31.540559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.540575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.551600] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fa3a0 00:25:30.429 [2024-04-26 12:21:31.552690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.552705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.563750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fb480 00:25:30.429 [2024-04-26 12:21:31.564835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.564853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.575923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fef90 00:25:30.429 [2024-04-26 12:21:31.577028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.577043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.588067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fd640 00:25:30.429 [2024-04-26 12:21:31.589165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.589180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.600217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fc560 00:25:30.429 [2024-04-26 12:21:31.601304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.601319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.612375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e0ea0 00:25:30.429 [2024-04-26 12:21:31.613462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.613478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.624493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e95a0 00:25:30.429 [2024-04-26 12:21:31.625611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.625627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:30.429 [2024-04-26 12:21:31.636676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e84c0 00:25:30.429 [2024-04-26 12:21:31.637788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.429 [2024-04-26 12:21:31.637804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.648880] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e73e0 00:25:30.692 [2024-04-26 12:21:31.649998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.650014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.661060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e6300 00:25:30.692 [2024-04-26 12:21:31.662170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.662186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.673222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190df988 00:25:30.692 [2024-04-26 12:21:31.674337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.674352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.684598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190df118 00:25:30.692 [2024-04-26 12:21:31.685693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.685708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.697553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e49b0 00:25:30.692 [2024-04-26 12:21:31.698649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.698665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.709735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f4b08 00:25:30.692 [2024-04-26 12:21:31.710830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.710849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.721921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190de8a8 00:25:30.692 [2024-04-26 12:21:31.723016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.723033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.734136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f2948 00:25:30.692 [2024-04-26 12:21:31.735228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.735244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.746504] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fb8b8 00:25:30.692 [2024-04-26 12:21:31.747567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.747588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.758696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fc998 00:25:30.692 [2024-04-26 12:21:31.759782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.759798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.770856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fda78 00:25:30.692 [2024-04-26 12:21:31.771945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.771961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.783026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fdeb0 00:25:30.692 [2024-04-26 12:21:31.784134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.784150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.796734] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fb048 00:25:30.692 [2024-04-26 12:21:31.798535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.798550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.807338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e99d8 00:25:30.692 [2024-04-26 12:21:31.808450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.808467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.819512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190eaab8 00:25:30.692 [2024-04-26 12:21:31.820625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.820641] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.831667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f7100 00:25:30.692 [2024-04-26 12:21:31.832762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.832778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.843821] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f81e0 00:25:30.692 [2024-04-26 12:21:31.844923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.844939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.857708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f92c0 00:25:30.692 [2024-04-26 12:21:31.859518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.859534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:30.692 [2024-04-26 12:21:31.867529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fd208 00:25:30.692 [2024-04-26 12:21:31.868629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.692 [2024-04-26 12:21:31.868644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:30.693 [2024-04-26 12:21:31.880429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e6300 00:25:30.693 [2024-04-26 12:21:31.881499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.693 [2024-04-26 12:21:31.881515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:30.693 [2024-04-26 12:21:31.892613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e73e0 00:25:30.693 [2024-04-26 12:21:31.893699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.693 [2024-04-26 12:21:31.893715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:30.693 [2024-04-26 12:21:31.904778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e84c0 00:25:30.693 [2024-04-26 12:21:31.905861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.693 [2024-04-26 12:21:31.905878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:30.954 [2024-04-26 12:21:31.918721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e95a0 00:25:30.954 [2024-04-26 12:21:31.920510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.954 [2024-04-26 12:21:31.920526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:30.954 [2024-04-26 12:21:31.929312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f81e0 00:25:30.954 [2024-04-26 12:21:31.930415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.954 [2024-04-26 12:21:31.930432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:30.954 [2024-04-26 12:21:31.941464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f7100 00:25:30.954 [2024-04-26 12:21:31.942541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.954 [2024-04-26 12:21:31.942559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:30.954 [2024-04-26 12:21:31.955173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f6020 00:25:30.954 [2024-04-26 12:21:31.956951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.954 [2024-04-26 12:21:31.956966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:30.954 [2024-04-26 12:21:31.965708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e1710 00:25:30.954 [2024-04-26 12:21:31.966792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.954 [2024-04-26 12:21:31.966809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:30.954 [2024-04-26 12:21:31.977860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e27f0 00:25:30.954 [2024-04-26 12:21:31.978938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.954 [2024-04-26 12:21:31.978954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:30.954 [2024-04-26 12:21:31.989980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fa3a0 00:25:30.954 [2024-04-26 12:21:31.991088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.954 [2024-04-26 
12:21:31.991104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:30.954 [2024-04-26 12:21:32.002083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e49b0 00:25:30.954 [2024-04-26 12:21:32.003163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.954 [2024-04-26 12:21:32.003180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:30.954 [2024-04-26 12:21:32.014235] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ee5c8 00:25:30.954 [2024-04-26 12:21:32.015255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.955 [2024-04-26 12:21:32.015270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:30.955 [2024-04-26 12:21:32.026392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f8618 00:25:30.955 [2024-04-26 12:21:32.027482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.955 [2024-04-26 12:21:32.027498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:30.955 [2024-04-26 12:21:32.038583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190eea00 00:25:30.955 [2024-04-26 12:21:32.039632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.955 [2024-04-26 12:21:32.039647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:30.955 [2024-04-26 12:21:32.050717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fa7d8 00:25:30.955 [2024-04-26 12:21:32.051755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.955 [2024-04-26 12:21:32.051771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:30.955 [2024-04-26 12:21:32.062891] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fe720 00:25:30.955 [2024-04-26 12:21:32.063984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.955 [2024-04-26 12:21:32.064002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:30.955 [2024-04-26 12:21:32.075035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ff3c8 00:25:30.955 [2024-04-26 12:21:32.076111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:30.955 [2024-04-26 12:21:32.076127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:30.955 [2024-04-26 12:21:32.087203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f31b8 00:25:30.955 [2024-04-26 12:21:32.088283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.955 [2024-04-26 12:21:32.088299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:30.955 [2024-04-26 12:21:32.099342] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e84c0 00:25:30.955 [2024-04-26 12:21:32.100399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.955 [2024-04-26 12:21:32.100415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:30.955 [2024-04-26 12:21:32.111503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e95a0 00:25:30.955 [2024-04-26 12:21:32.112564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.955 [2024-04-26 12:21:32.112580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:30.955 [2024-04-26 12:21:32.123669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ea680 00:25:30.955 [2024-04-26 12:21:32.124733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.955 [2024-04-26 12:21:32.124749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:30.955 [2024-04-26 12:21:32.135871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190eb760 00:25:30.955 [2024-04-26 12:21:32.136908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.955 [2024-04-26 12:21:32.136924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:30.955 [2024-04-26 12:21:32.148004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ec840 00:25:30.955 [2024-04-26 12:21:32.149063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.955 [2024-04-26 12:21:32.149079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:30.955 [2024-04-26 12:21:32.160120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ed920 00:25:30.955 [2024-04-26 12:21:32.161149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24087 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:30.955 [2024-04-26 12:21:32.161165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:30.955 [2024-04-26 12:21:32.172272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190de470 00:25:30.955 [2024-04-26 12:21:32.173340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.955 [2024-04-26 12:21:32.173358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.184434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190efae0 00:25:31.217 [2024-04-26 12:21:32.185497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.185512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.196605] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f0bc0 00:25:31.217 [2024-04-26 12:21:32.197669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.197684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.208770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f1ca0 00:25:31.217 [2024-04-26 12:21:32.209832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.209851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.220915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e4de8 00:25:31.217 [2024-04-26 12:21:32.221971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.221987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.233038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f3e60 00:25:31.217 [2024-04-26 12:21:32.234084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.234100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.245201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fcdd0 00:25:31.217 [2024-04-26 12:21:32.246269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2696 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.246284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.258927] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190dfdc0 00:25:31.217 [2024-04-26 12:21:32.260686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.260702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.268743] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ea680 00:25:31.217 [2024-04-26 12:21:32.269805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.269821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.281669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e95a0 00:25:31.217 [2024-04-26 12:21:32.282733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.282749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.293824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f31b8 00:25:31.217 [2024-04-26 12:21:32.294909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.294925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.306000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fc128 00:25:31.217 [2024-04-26 12:21:32.307077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.307093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.317369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fd640 00:25:31.217 [2024-04-26 12:21:32.318435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.318451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.330312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f57b0 00:25:31.217 [2024-04-26 12:21:32.331374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.331389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.344025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190df550 00:25:31.217 [2024-04-26 12:21:32.345774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.345790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.353830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190eaef0 00:25:31.217 [2024-04-26 12:21:32.354765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.354780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.366779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e9e10 00:25:31.217 [2024-04-26 12:21:32.367844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.367860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.378923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e8d30 00:25:31.217 [2024-04-26 12:21:32.379952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.379968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.392629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e7c50 00:25:31.217 [2024-04-26 12:21:32.394393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.394408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.403219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e0a68 00:25:31.217 [2024-04-26 12:21:32.404291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.404307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.415389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f1868 00:25:31.217 [2024-04-26 12:21:32.416429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:119 nsid:1 lba:24549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.416445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.217 [2024-04-26 12:21:32.427538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f3e60 00:25:31.217 [2024-04-26 12:21:32.428594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.217 [2024-04-26 12:21:32.428610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.439690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fcdd0 00:25:31.479 [2024-04-26 12:21:32.440709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.440725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.451177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e73e0 00:25:31.479 [2024-04-26 12:21:32.452211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.452227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.464284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e6300 00:25:31.479 [2024-04-26 12:21:32.465496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.465512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.476598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f8e88 00:25:31.479 [2024-04-26 12:21:32.477820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.477836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.488751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e88f8 00:25:31.479 [2024-04-26 12:21:32.489999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.490017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.500903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e7818 00:25:31.479 [2024-04-26 12:21:32.502149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.502165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.513058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e6738 00:25:31.479 [2024-04-26 12:21:32.514315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.514331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.525230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190df550 00:25:31.479 [2024-04-26 12:21:32.526471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.526487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.537387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f57b0 00:25:31.479 [2024-04-26 12:21:32.538634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.538650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.548848] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ef6a8 00:25:31.479 [2024-04-26 12:21:32.550086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.550102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.563316] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fb480 00:25:31.479 [2024-04-26 12:21:32.565253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.565269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.573913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fc128 00:25:31.479 [2024-04-26 12:21:32.575136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.575152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.585253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190eea00 00:25:31.479 [2024-04-26 12:21:32.586479] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.586494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.598166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f8618 00:25:31.479 [2024-04-26 12:21:32.599372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.599388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.611832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f7538 00:25:31.479 [2024-04-26 12:21:32.613764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.613780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.622414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f57b0 00:25:31.479 [2024-04-26 12:21:32.623658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.479 [2024-04-26 12:21:32.623673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.479 [2024-04-26 12:21:32.634560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fd640 00:25:31.479 [2024-04-26 12:21:32.635801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.480 [2024-04-26 12:21:32.635817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.480 [2024-04-26 12:21:32.646747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fbcf0 00:25:31.480 [2024-04-26 12:21:32.647994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.480 [2024-04-26 12:21:32.648009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.480 [2024-04-26 12:21:32.658904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e0630 00:25:31.480 [2024-04-26 12:21:32.660143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.480 [2024-04-26 12:21:32.660158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.480 [2024-04-26 12:21:32.671051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f1430 00:25:31.480 
[2024-04-26 12:21:32.672285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.480 [2024-04-26 12:21:32.672300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.480 [2024-04-26 12:21:32.683192] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f0788 00:25:31.480 [2024-04-26 12:21:32.684435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.480 [2024-04-26 12:21:32.684451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.480 [2024-04-26 12:21:32.694593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f92c0 00:25:31.480 [2024-04-26 12:21:32.695814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.480 [2024-04-26 12:21:32.695830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.707539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ef6a8 00:25:31.741 [2024-04-26 12:21:32.708770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.708785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.719701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fb480 00:25:31.741 [2024-04-26 12:21:32.720909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.720925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.731849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fef90 00:25:31.741 [2024-04-26 12:21:32.733090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.733105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.743993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f35f0 00:25:31.741 [2024-04-26 12:21:32.745211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.745226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.757669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with 
pdu=0x2000190fc560 00:25:31.741 [2024-04-26 12:21:32.759601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.759617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.768280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3498 00:25:31.741 [2024-04-26 12:21:32.769522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.769538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.780442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fdeb0 00:25:31.741 [2024-04-26 12:21:32.781664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.781680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.792591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f0788 00:25:31.741 [2024-04-26 12:21:32.793841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.793857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.804713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ddc00 00:25:31.741 [2024-04-26 12:21:32.805960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.805978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.816832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:31.741 [2024-04-26 12:21:32.818034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.818049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.828966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f2d80 00:25:31.741 [2024-04-26 12:21:32.830191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.830207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.841102] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe4e370) with pdu=0x2000190ec408 00:25:31.741 [2024-04-26 12:21:32.842365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.842381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.853258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fa3a0 00:25:31.741 [2024-04-26 12:21:32.854504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.854520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.864557] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e12d8 00:25:31.741 [2024-04-26 12:21:32.865770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.865786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.879105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f1ca0 00:25:31.741 [2024-04-26 12:21:32.881009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.741 [2024-04-26 12:21:32.881025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.741 [2024-04-26 12:21:32.889702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f35f0 00:25:31.741 [2024-04-26 12:21:32.890921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.742 [2024-04-26 12:21:32.890937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.742 [2024-04-26 12:21:32.901895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fef90 00:25:31.742 [2024-04-26 12:21:32.903133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.742 [2024-04-26 12:21:32.903148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.742 [2024-04-26 12:21:32.914249] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fb480 00:25:31.742 [2024-04-26 12:21:32.915485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.742 [2024-04-26 12:21:32.915501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.742 [2024-04-26 12:21:32.926425] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ef6a8 00:25:31.742 [2024-04-26 12:21:32.927658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.742 [2024-04-26 12:21:32.927674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.742 [2024-04-26 12:21:32.940136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e27f0 00:25:31.742 [2024-04-26 12:21:32.942026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.742 [2024-04-26 12:21:32.942041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.742 [2024-04-26 12:21:32.950743] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190eea00 00:25:31.742 [2024-04-26 12:21:32.951980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.742 [2024-04-26 12:21:32.951996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:32.962112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fa3a0 00:25:32.004 [2024-04-26 12:21:32.963325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:32.963341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:32.976588] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190ec408 00:25:32.004 [2024-04-26 12:21:32.978514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:32.978529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:32.987205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190fac10 00:25:32.004 [2024-04-26 12:21:32.988440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:32.988456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:32.999350] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190eee38 00:25:32.004 [2024-04-26 12:21:33.000572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.000588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:33.011498] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f8a50 00:25:32.004 [2024-04-26 12:21:33.012731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.012747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:33.023682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f7970 00:25:32.004 [2024-04-26 12:21:33.024906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.024922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:33.035834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190f5378 00:25:32.004 [2024-04-26 12:21:33.037040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.037056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:33.047994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e01f8 00:25:32.004 [2024-04-26 12:21:33.049223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.049240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:33.060128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.004 [2024-04-26 12:21:33.061300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.061315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:33.073761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190edd58 00:25:32.004 [2024-04-26 12:21:33.075673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.075689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:33.084328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.004 [2024-04-26 12:21:33.085548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.085565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 
12:21:33.096441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.004 [2024-04-26 12:21:33.097653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.097669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:33.108593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.004 [2024-04-26 12:21:33.109793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.109808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:33.120708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.004 [2024-04-26 12:21:33.121912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.121931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:33.132808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.004 [2024-04-26 12:21:33.134032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.134048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:33.144942] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.004 [2024-04-26 12:21:33.146153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.146169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:33.157057] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.004 [2024-04-26 12:21:33.158269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.158284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:33.169173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.004 [2024-04-26 12:21:33.170388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.170404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
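Each burst in the trace above follows the same three-line pattern: tcp.c reports a data digest (CRC) mismatch on the TCP qpair, nvme_qpair.c prints the affected WRITE command, and the matching completion carries generic status 00/22 (Command Transient Transport Error), so the digest failure reaches the host as a retryable transport error rather than a media error. A hedged post-processing sketch for tallying these entries from a saved copy of this console output (bperf.log is a hypothetical capture file; the test itself counts the errors through bdev_get_iostat, as shown further below):
# Count digest-induced transient transport errors in a saved copy of this output.
log=bperf.log                      # assumed capture file, not produced by the test
grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' "$log"
# Per-command breakdown: which CIDs the errors landed on.
grep -o 'TRANSIENT TRANSPORT ERROR (00/22) qid:[0-9]* cid:[0-9]*' "$log" \
    | awk '{print $NF}' | sort | uniq -c | sort -rn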
00:25:32.004 [2024-04-26 12:21:33.181296] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.004 [2024-04-26 12:21:33.182484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.182500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.004 [2024-04-26 12:21:33.193392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.004 [2024-04-26 12:21:33.194585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.004 [2024-04-26 12:21:33.194601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.005 [2024-04-26 12:21:33.205490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.005 [2024-04-26 12:21:33.206698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.005 [2024-04-26 12:21:33.206714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.005 [2024-04-26 12:21:33.217597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.005 [2024-04-26 12:21:33.218807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.005 [2024-04-26 12:21:33.218823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.266 [2024-04-26 12:21:33.229726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.266 [2024-04-26 12:21:33.230942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.266 [2024-04-26 12:21:33.230958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.266 [2024-04-26 12:21:33.241867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.266 [2024-04-26 12:21:33.243093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.266 [2024-04-26 12:21:33.243109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.266 [2024-04-26 12:21:33.253975] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.266 [2024-04-26 12:21:33.255155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.266 [2024-04-26 12:21:33.255171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:25:32.266 [2024-04-26 12:21:33.266097] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.266 [2024-04-26 12:21:33.267309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.266 [2024-04-26 12:21:33.267324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.266 [2024-04-26 12:21:33.278200] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.266 [2024-04-26 12:21:33.279368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.266 [2024-04-26 12:21:33.279384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.266 [2024-04-26 12:21:33.290322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.266 [2024-04-26 12:21:33.291531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.266 [2024-04-26 12:21:33.291547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.266 [2024-04-26 12:21:33.302434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.266 [2024-04-26 12:21:33.303649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.267 [2024-04-26 12:21:33.303665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.267 [2024-04-26 12:21:33.314552] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e370) with pdu=0x2000190e3060 00:25:32.267 [2024-04-26 12:21:33.315754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.267 [2024-04-26 12:21:33.315770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.267 00:25:32.267 Latency(us) 00:25:32.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.267 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:32.267 nvme0n1 : 2.01 20996.61 82.02 0.00 0.00 6087.55 2239.15 14090.24 00:25:32.267 =================================================================================================================== 00:25:32.267 Total : 20996.61 82.02 0.00 0.00 6087.55 2239.15 14090.24 00:25:32.267 0 00:25:32.267 12:21:33 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:32.267 12:21:33 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:32.267 12:21:33 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:32.267 | .driver_specific 00:25:32.267 | .nvme_error 00:25:32.267 | .status_code 00:25:32.267 | .command_transient_transport_error' 00:25:32.267 12:21:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:32.267 12:21:33 -- host/digest.sh@71 -- # (( 165 > 0 )) 00:25:32.267 12:21:33 -- host/digest.sh@73 -- # killprocess 3554022 00:25:32.267 12:21:33 -- common/autotest_common.sh@936 -- # '[' -z 3554022 ']' 00:25:32.267 12:21:33 -- common/autotest_common.sh@940 -- # kill -0 3554022 00:25:32.267 12:21:33 -- common/autotest_common.sh@941 -- # uname 00:25:32.528 12:21:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:32.528 12:21:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3554022 00:25:32.528 12:21:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:32.528 12:21:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:32.528 12:21:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3554022' 00:25:32.528 killing process with pid 3554022 00:25:32.528 12:21:33 -- common/autotest_common.sh@955 -- # kill 3554022 00:25:32.528 Received shutdown signal, test time was about 2.000000 seconds 00:25:32.528 00:25:32.528 Latency(us) 00:25:32.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.528 =================================================================================================================== 00:25:32.528 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:32.528 12:21:33 -- common/autotest_common.sh@960 -- # wait 3554022 00:25:32.528 12:21:33 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:32.528 12:21:33 -- host/digest.sh@54 -- # local rw bs qd 00:25:32.528 12:21:33 -- host/digest.sh@56 -- # rw=randwrite 00:25:32.528 12:21:33 -- host/digest.sh@56 -- # bs=131072 00:25:32.528 12:21:33 -- host/digest.sh@56 -- # qd=16 00:25:32.528 12:21:33 -- host/digest.sh@58 -- # bperfpid=3554795 00:25:32.528 12:21:33 -- host/digest.sh@60 -- # waitforlisten 3554795 /var/tmp/bperf.sock 00:25:32.528 12:21:33 -- common/autotest_common.sh@817 -- # '[' -z 3554795 ']' 00:25:32.528 12:21:33 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:32.528 12:21:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:32.528 12:21:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:32.528 12:21:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:32.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:32.528 12:21:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:32.528 12:21:33 -- common/autotest_common.sh@10 -- # set +x 00:25:32.528 [2024-04-26 12:21:33.700990] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:32.528 [2024-04-26 12:21:33.701050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3554795 ] 00:25:32.528 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:32.528 Zero copy mechanism will not be used. 
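The host/digest.sh trace just above shows how the test decides it passed: it queries bdevperf's RPC socket for per-bdev NVMe error statistics and extracts the transient transport error counter with jq, requiring it to be non-zero (165 in this run) before killing the bdevperf process. A minimal standalone sketch of that query, assuming a bdevperf instance is already listening on /var/tmp/bperf.sock with error statistics enabled via bdev_nvme_set_options --nvme-error-stat (as is done for the next run later in this trace):

#!/usr/bin/env bash
# Sketch: read the COMMAND TRANSIENT TRANSPORT ERROR counter for nvme0n1.
# Paths, socket and bdev name are carried over from the surrounding trace,
# not guaranteed outside this job's environment.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')

# The digest test only needs at least one injected error to have been counted.
(( errcount > 0 )) && echo "transient transport errors: $errcount"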
00:25:32.528 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.789 [2024-04-26 12:21:33.777660] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.789 [2024-04-26 12:21:33.829790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.362 12:21:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:33.362 12:21:34 -- common/autotest_common.sh@850 -- # return 0 00:25:33.362 12:21:34 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:33.362 12:21:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:33.623 12:21:34 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:33.623 12:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.623 12:21:34 -- common/autotest_common.sh@10 -- # set +x 00:25:33.623 12:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.623 12:21:34 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:33.623 12:21:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:33.884 nvme0n1 00:25:33.884 12:21:35 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:33.884 12:21:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.884 12:21:35 -- common/autotest_common.sh@10 -- # set +x 00:25:33.884 12:21:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.884 12:21:35 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:33.884 12:21:35 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:33.884 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:33.884 Zero copy mechanism will not be used. 00:25:33.884 Running I/O for 2 seconds... 
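Before the 2-second randwrite pass whose output follows, the trace wires up the data digest failure path: error statistics and unlimited retries are enabled on the NVMe bdev layer, the controller is attached with data digest enabled (--ddgst), CRC-32C error injection is armed in the accel layer, and the workload is kicked off through bdevperf's perform_tests helper. A condensed sketch of that sequence using only the RPCs visible in this trace; which RPC socket the two accel_error_inject_error calls land on is an assumption (the script issues them through its target-side rpc_cmd helper, shown here against SPDK's default socket):

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TGT_RPC="$SPDK/scripts/rpc.py"                            # target side (rpc_cmd) - socket is an assumption
BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf side, as in this run

# Count errors per status code and retry indefinitely, so digest failures show
# up as transient transport error statistics instead of failed I/O jobs.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start clean, attach with data digest enabled, then arm CRC-32C corruption
# in the accel layer (arguments copied verbatim from the trace above).
$TGT_RPC accel_error_inject_error -o crc32c -t disable
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive the preconfigured workload (randwrite, 128 KiB I/O, queue depth 16).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests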
00:25:34.146 [2024-04-26 12:21:35.116362] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.116733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.116760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.128335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.128686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.128705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.139371] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.139710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.139728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.151593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.151954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.151973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.162065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.162446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.162464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.168384] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.168727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.168750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.173620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.173846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.173863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.181354] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.181701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.181718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.188372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.188722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.188740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.193445] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.193774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.193791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.199074] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.199445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.199462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.204948] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.205264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.205282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.210490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.210805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.210822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.216011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.216355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.216373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.222083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.222467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.222483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.227602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.227820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.227840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.236223] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.236574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.236591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.242046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.242257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.242274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.248970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.249325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.146 [2024-04-26 12:21:35.249343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.146 [2024-04-26 12:21:35.254622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.146 [2024-04-26 12:21:35.254930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.254947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.260293] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.260593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.260610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.266958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.267175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.267192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.271187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.271515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.271533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.280322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.280531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.280548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.285902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.286229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.286246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.291791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.292009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.292025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.297391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.297609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.297625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.302067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.302418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 
[2024-04-26 12:21:35.302435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.308201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.308411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.308427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.315800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.316146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.316163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.324726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.325037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.325054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.330188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.330397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.330418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.336741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.337004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.337021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.343784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.344130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.344147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.350698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.351004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.351021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.356974] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.357323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.357340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.147 [2024-04-26 12:21:35.362696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.147 [2024-04-26 12:21:35.363001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.147 [2024-04-26 12:21:35.363018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.409 [2024-04-26 12:21:35.369966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.409 [2024-04-26 12:21:35.370294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.409 [2024-04-26 12:21:35.370311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.409 [2024-04-26 12:21:35.374968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.409 [2024-04-26 12:21:35.375304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.409 [2024-04-26 12:21:35.375321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.409 [2024-04-26 12:21:35.379904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.409 [2024-04-26 12:21:35.380289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.409 [2024-04-26 12:21:35.380306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.409 [2024-04-26 12:21:35.386763] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.409 [2024-04-26 12:21:35.386991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.387008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.393660] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.393972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.393989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.402021] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.402369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.402385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.410186] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.410528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.410545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.422479] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.422824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.422845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.433476] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.433822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.433844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.443443] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.443786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.443803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.454516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.454611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.454625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.463911] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.464243] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.464260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.469996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.470352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.470368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.476859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.477246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.477263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.485665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.486002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.486019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.491904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.492116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.492132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.498568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.498780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.498796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.507080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.507390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.507407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.513061] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.513270] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.513286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.521977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.522195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.522213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.530626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.530987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.531006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.539902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.539968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.539983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.548252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.548460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.548476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.557652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.558002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.558018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.564219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.410 [2024-04-26 12:21:35.564554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.410 [2024-04-26 12:21:35.564570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.410 [2024-04-26 12:21:35.572085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 
00:25:34.410 [2024-04-26 12:21:35.572403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.411 [2024-04-26 12:21:35.572419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.411 [2024-04-26 12:21:35.578888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.411 [2024-04-26 12:21:35.579098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.411 [2024-04-26 12:21:35.579115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.411 [2024-04-26 12:21:35.585251] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.411 [2024-04-26 12:21:35.585592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.411 [2024-04-26 12:21:35.585608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.411 [2024-04-26 12:21:35.592723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.411 [2024-04-26 12:21:35.593067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.411 [2024-04-26 12:21:35.593084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.411 [2024-04-26 12:21:35.600211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.411 [2024-04-26 12:21:35.600543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.411 [2024-04-26 12:21:35.600560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.411 [2024-04-26 12:21:35.606564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.411 [2024-04-26 12:21:35.606889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.411 [2024-04-26 12:21:35.606906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.411 [2024-04-26 12:21:35.613861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.411 [2024-04-26 12:21:35.614187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.411 [2024-04-26 12:21:35.614205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.411 [2024-04-26 12:21:35.621909] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.411 [2024-04-26 12:21:35.622255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.411 [2024-04-26 12:21:35.622271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.673 [2024-04-26 12:21:35.629889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.673 [2024-04-26 12:21:35.629960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.673 [2024-04-26 12:21:35.629974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.673 [2024-04-26 12:21:35.637883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.673 [2024-04-26 12:21:35.638221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.673 [2024-04-26 12:21:35.638237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.673 [2024-04-26 12:21:35.646028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.673 [2024-04-26 12:21:35.646396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.673 [2024-04-26 12:21:35.646412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.673 [2024-04-26 12:21:35.652337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.673 [2024-04-26 12:21:35.652770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.673 [2024-04-26 12:21:35.652788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.673 [2024-04-26 12:21:35.658226] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.673 [2024-04-26 12:21:35.658436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.673 [2024-04-26 12:21:35.658452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.673 [2024-04-26 12:21:35.668072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.673 [2024-04-26 12:21:35.668383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.673 [2024-04-26 12:21:35.668400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.673 [2024-04-26 12:21:35.678543] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.674 [2024-04-26 12:21:35.678609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.674 [2024-04-26 12:21:35.678624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.674 [2024-04-26 12:21:35.690844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.674 [2024-04-26 12:21:35.691173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.674 [2024-04-26 12:21:35.691190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.674 [2024-04-26 12:21:35.701288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.674 [2024-04-26 12:21:35.701626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.674 [2024-04-26 12:21:35.701644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.674 [2024-04-26 12:21:35.709546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.674 [2024-04-26 12:21:35.709754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.674 [2024-04-26 12:21:35.709770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.674 [2024-04-26 12:21:35.716215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.674 [2024-04-26 12:21:35.716443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.674 [2024-04-26 12:21:35.716459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.674 [2024-04-26 12:21:35.721984] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.674 [2024-04-26 12:21:35.722193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.674 [2024-04-26 12:21:35.722209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.674 [2024-04-26 12:21:35.729475] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:34.674 [2024-04-26 12:21:35.729875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.674 [2024-04-26 12:21:35.729891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:25:34.674 [2024-04-26 12:21:35.738436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90
00:25:34.674 [2024-04-26 12:21:35.738781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.674 [2024-04-26 12:21:35.738801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated log entries omitted: the same three-message pattern (tcp.c:2047:data_crc32_calc_done *ERROR* Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90, nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* WRITE sqid:1 cid:15, nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15) recurs for each injected WRITE with varying lba and sqhd values from 12:21:35.745 through 12:21:36.940 ...]
00:25:35.991 [2024-04-26 12:21:36.949858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90
00:25:35.991 [2024-04-26 12:21:36.950227] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:36.950243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:36.961358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.991 [2024-04-26 12:21:36.961750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:36.961766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:36.969482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.991 [2024-04-26 12:21:36.969776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:36.969796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:36.978538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.991 [2024-04-26 12:21:36.978761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:36.978777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:36.988123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.991 [2024-04-26 12:21:36.988507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:36.988524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:36.997762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.991 [2024-04-26 12:21:36.998063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:36.998080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:37.005334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.991 [2024-04-26 12:21:37.005698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:37.005715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:37.014426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.991 
[2024-04-26 12:21:37.014746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:37.014763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:37.022949] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.991 [2024-04-26 12:21:37.023243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:37.023260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:37.031768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.991 [2024-04-26 12:21:37.032076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:37.032093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:37.041646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.991 [2024-04-26 12:21:37.041987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:37.042004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:37.051306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.991 [2024-04-26 12:21:37.051560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:37.051576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:37.061329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.991 [2024-04-26 12:21:37.061677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:37.061694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:37.070362] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.991 [2024-04-26 12:21:37.070647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:37.070664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:37.079272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with 
pdu=0x2000190fef90 00:25:35.991 [2024-04-26 12:21:37.079574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.991 [2024-04-26 12:21:37.079590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:35.991 [2024-04-26 12:21:37.088223] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.992 [2024-04-26 12:21:37.088553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.992 [2024-04-26 12:21:37.088569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.992 [2024-04-26 12:21:37.098252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.992 [2024-04-26 12:21:37.098591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.992 [2024-04-26 12:21:37.098608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:35.992 [2024-04-26 12:21:37.107990] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe4e720) with pdu=0x2000190fef90 00:25:35.992 [2024-04-26 12:21:37.108300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.992 [2024-04-26 12:21:37.108316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:35.992 00:25:35.992 Latency(us) 00:25:35.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.992 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:35.992 nvme0n1 : 2.00 3809.51 476.19 0.00 0.00 4192.00 2007.04 13434.88 00:25:35.992 =================================================================================================================== 00:25:35.992 Total : 3809.51 476.19 0.00 0.00 4192.00 2007.04 13434.88 00:25:35.992 0 00:25:35.992 12:21:37 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:35.992 12:21:37 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:35.992 12:21:37 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:35.992 | .driver_specific 00:25:35.992 | .nvme_error 00:25:35.992 | .status_code 00:25:35.992 | .command_transient_transport_error' 00:25:35.992 12:21:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:36.254 12:21:37 -- host/digest.sh@71 -- # (( 246 > 0 )) 00:25:36.254 12:21:37 -- host/digest.sh@73 -- # killprocess 3554795 00:25:36.254 12:21:37 -- common/autotest_common.sh@936 -- # '[' -z 3554795 ']' 00:25:36.254 12:21:37 -- common/autotest_common.sh@940 -- # kill -0 3554795 00:25:36.254 12:21:37 -- common/autotest_common.sh@941 -- # uname 00:25:36.254 12:21:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:36.254 12:21:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3554795 00:25:36.254 12:21:37 -- common/autotest_common.sh@942 -- # 
process_name=reactor_1 00:25:36.254 12:21:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:36.254 12:21:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3554795' 00:25:36.254 killing process with pid 3554795 00:25:36.254 12:21:37 -- common/autotest_common.sh@955 -- # kill 3554795 00:25:36.254 Received shutdown signal, test time was about 2.000000 seconds 00:25:36.254 00:25:36.254 Latency(us) 00:25:36.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.254 =================================================================================================================== 00:25:36.254 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:36.254 12:21:37 -- common/autotest_common.sh@960 -- # wait 3554795 00:25:36.254 12:21:37 -- host/digest.sh@116 -- # killprocess 3552372 00:25:36.254 12:21:37 -- common/autotest_common.sh@936 -- # '[' -z 3552372 ']' 00:25:36.254 12:21:37 -- common/autotest_common.sh@940 -- # kill -0 3552372 00:25:36.254 12:21:37 -- common/autotest_common.sh@941 -- # uname 00:25:36.254 12:21:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:36.254 12:21:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3552372 00:25:36.515 12:21:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:36.515 12:21:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:36.515 12:21:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3552372' 00:25:36.515 killing process with pid 3552372 00:25:36.515 12:21:37 -- common/autotest_common.sh@955 -- # kill 3552372 00:25:36.515 12:21:37 -- common/autotest_common.sh@960 -- # wait 3552372 00:25:36.515 00:25:36.515 real 0m16.165s 00:25:36.515 user 0m31.733s 00:25:36.515 sys 0m3.342s 00:25:36.515 12:21:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:36.515 12:21:37 -- common/autotest_common.sh@10 -- # set +x 00:25:36.515 ************************************ 00:25:36.515 END TEST nvmf_digest_error 00:25:36.515 ************************************ 00:25:36.515 12:21:37 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:36.515 12:21:37 -- host/digest.sh@150 -- # nvmftestfini 00:25:36.515 12:21:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:36.515 12:21:37 -- nvmf/common.sh@117 -- # sync 00:25:36.515 12:21:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:36.515 12:21:37 -- nvmf/common.sh@120 -- # set +e 00:25:36.515 12:21:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:36.515 12:21:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:36.515 rmmod nvme_tcp 00:25:36.515 rmmod nvme_fabrics 00:25:36.515 rmmod nvme_keyring 00:25:36.515 12:21:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:36.515 12:21:37 -- nvmf/common.sh@124 -- # set -e 00:25:36.515 12:21:37 -- nvmf/common.sh@125 -- # return 0 00:25:36.515 12:21:37 -- nvmf/common.sh@478 -- # '[' -n 3552372 ']' 00:25:36.515 12:21:37 -- nvmf/common.sh@479 -- # killprocess 3552372 00:25:36.515 12:21:37 -- common/autotest_common.sh@936 -- # '[' -z 3552372 ']' 00:25:36.515 12:21:37 -- common/autotest_common.sh@940 -- # kill -0 3552372 00:25:36.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3552372) - No such process 00:25:36.515 12:21:37 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3552372 is not found' 00:25:36.515 Process with pid 3552372 is not found 00:25:36.776 12:21:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:36.776 
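The pass/fail decision above comes straight out of bdevperf's RPC socket: host/digest.sh reads the per-bdev error counters with bdev_get_iostat and only asserts that the transient-transport-error count is non-zero after the forced data-digest corruption. A minimal standalone sketch of that query, assuming bdevperf is still listening on /var/tmp/bperf.sock and exposes the bdev as nvme0n1 (both taken from the trace; the rpc/errs variable names are illustrative only):

# Count NVMe "transient transport error" completions recorded by bdevperf,
# using the same RPC call and jq filter that host/digest.sh traces above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The digest-error test passes when at least one such completion was observed.
(( errs > 0 )) && echo "observed $errs transient transport errors"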
12:21:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:36.776 12:21:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:36.776 12:21:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:36.776 12:21:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:36.776 12:21:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.776 12:21:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.776 12:21:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.690 12:21:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:38.690 00:25:38.690 real 0m42.199s 00:25:38.690 user 1m5.694s 00:25:38.690 sys 0m12.225s 00:25:38.690 12:21:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:38.690 12:21:39 -- common/autotest_common.sh@10 -- # set +x 00:25:38.690 ************************************ 00:25:38.690 END TEST nvmf_digest 00:25:38.690 ************************************ 00:25:38.690 12:21:39 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:25:38.690 12:21:39 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:25:38.690 12:21:39 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:25:38.690 12:21:39 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:38.690 12:21:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:38.690 12:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:38.690 12:21:39 -- common/autotest_common.sh@10 -- # set +x 00:25:38.952 ************************************ 00:25:38.952 START TEST nvmf_bdevperf 00:25:38.952 ************************************ 00:25:38.952 12:21:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:38.952 * Looking for test storage... 
00:25:38.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:38.952 12:21:40 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.952 12:21:40 -- nvmf/common.sh@7 -- # uname -s 00:25:38.952 12:21:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.952 12:21:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.952 12:21:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.952 12:21:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.952 12:21:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.952 12:21:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.952 12:21:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.952 12:21:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.952 12:21:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.952 12:21:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.952 12:21:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:38.952 12:21:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:38.953 12:21:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.953 12:21:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.953 12:21:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.953 12:21:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.953 12:21:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.953 12:21:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.953 12:21:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.953 12:21:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.953 12:21:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.953 12:21:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.953 12:21:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.953 12:21:40 -- paths/export.sh@5 -- # export PATH 00:25:38.953 12:21:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.953 12:21:40 -- nvmf/common.sh@47 -- # : 0 00:25:38.953 12:21:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:38.953 12:21:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:38.953 12:21:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.953 12:21:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.953 12:21:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.953 12:21:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:38.953 12:21:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:38.953 12:21:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:38.953 12:21:40 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:38.953 12:21:40 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:38.953 12:21:40 -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:38.953 12:21:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:38.953 12:21:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.953 12:21:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:38.953 12:21:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:38.953 12:21:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:38.953 12:21:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.953 12:21:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.953 12:21:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.953 12:21:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:38.953 12:21:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:38.953 12:21:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:38.953 12:21:40 -- common/autotest_common.sh@10 -- # set +x 00:25:45.539 12:21:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:45.539 12:21:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:45.539 12:21:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:45.539 12:21:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:45.539 12:21:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:45.539 12:21:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:45.539 12:21:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:45.539 12:21:46 -- nvmf/common.sh@295 -- # net_devs=() 00:25:45.539 12:21:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:45.539 12:21:46 -- nvmf/common.sh@296 
-- # e810=() 00:25:45.539 12:21:46 -- nvmf/common.sh@296 -- # local -ga e810 00:25:45.539 12:21:46 -- nvmf/common.sh@297 -- # x722=() 00:25:45.539 12:21:46 -- nvmf/common.sh@297 -- # local -ga x722 00:25:45.539 12:21:46 -- nvmf/common.sh@298 -- # mlx=() 00:25:45.539 12:21:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:45.539 12:21:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.539 12:21:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.539 12:21:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.539 12:21:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.539 12:21:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.539 12:21:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.539 12:21:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.540 12:21:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.540 12:21:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.540 12:21:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.540 12:21:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.540 12:21:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:45.540 12:21:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:45.540 12:21:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:45.540 12:21:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.540 12:21:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:45.540 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:45.540 12:21:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.540 12:21:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:45.540 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:45.540 12:21:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:45.540 12:21:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.540 12:21:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.540 12:21:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:45.540 12:21:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.540 12:21:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:45.540 Found 
net devices under 0000:31:00.0: cvl_0_0 00:25:45.540 12:21:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.540 12:21:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.540 12:21:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.540 12:21:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:45.540 12:21:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.540 12:21:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:45.540 Found net devices under 0000:31:00.1: cvl_0_1 00:25:45.540 12:21:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.540 12:21:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:45.540 12:21:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:45.540 12:21:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:45.540 12:21:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.540 12:21:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.540 12:21:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.540 12:21:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:45.540 12:21:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.540 12:21:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.540 12:21:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:45.540 12:21:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.540 12:21:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.540 12:21:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:45.540 12:21:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:45.540 12:21:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.540 12:21:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.540 12:21:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.540 12:21:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.540 12:21:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:45.540 12:21:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.540 12:21:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.540 12:21:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.540 12:21:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:45.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:25:45.540 00:25:45.540 --- 10.0.0.2 ping statistics --- 00:25:45.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.540 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:25:45.540 12:21:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:25:45.540 00:25:45.540 --- 10.0.0.1 ping statistics --- 00:25:45.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.540 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:25:45.540 12:21:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.540 12:21:46 -- nvmf/common.sh@411 -- # return 0 00:25:45.540 12:21:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:45.540 12:21:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.540 12:21:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:45.540 12:21:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.540 12:21:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:45.540 12:21:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:45.540 12:21:46 -- host/bdevperf.sh@25 -- # tgt_init 00:25:45.540 12:21:46 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:45.540 12:21:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:45.540 12:21:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:45.540 12:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:45.540 12:21:46 -- nvmf/common.sh@470 -- # nvmfpid=3559774 00:25:45.540 12:21:46 -- nvmf/common.sh@471 -- # waitforlisten 3559774 00:25:45.540 12:21:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:45.540 12:21:46 -- common/autotest_common.sh@817 -- # '[' -z 3559774 ']' 00:25:45.540 12:21:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.540 12:21:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:45.540 12:21:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.540 12:21:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:45.540 12:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:45.803 [2024-04-26 12:21:46.777988] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:45.803 [2024-04-26 12:21:46.778054] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.803 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.803 [2024-04-26 12:21:46.866162] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:45.803 [2024-04-26 12:21:46.957964] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.803 [2024-04-26 12:21:46.958027] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.803 [2024-04-26 12:21:46.958035] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.803 [2024-04-26 12:21:46.958042] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.803 [2024-04-26 12:21:46.958048] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
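Condensed from the nvmf_tcp_init trace above, this is the loopback NVMe/TCP topology the bdevperf test runs on: the first E810 port (renamed cvl_0_0) is moved into a private network namespace and acts as the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. All commands below appear verbatim in the trace; only the ordering comments are added.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator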
00:25:45.803 [2024-04-26 12:21:46.958193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.803 [2024-04-26 12:21:46.958357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.803 [2024-04-26 12:21:46.958359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:46.374 12:21:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:46.374 12:21:47 -- common/autotest_common.sh@850 -- # return 0 00:25:46.374 12:21:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:46.374 12:21:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:46.374 12:21:47 -- common/autotest_common.sh@10 -- # set +x 00:25:46.635 12:21:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.635 12:21:47 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:46.635 12:21:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.635 12:21:47 -- common/autotest_common.sh@10 -- # set +x 00:25:46.635 [2024-04-26 12:21:47.620337] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.635 12:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.635 12:21:47 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:46.635 12:21:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.635 12:21:47 -- common/autotest_common.sh@10 -- # set +x 00:25:46.635 Malloc0 00:25:46.635 12:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.635 12:21:47 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:46.635 12:21:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.635 12:21:47 -- common/autotest_common.sh@10 -- # set +x 00:25:46.635 12:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.635 12:21:47 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:46.635 12:21:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.635 12:21:47 -- common/autotest_common.sh@10 -- # set +x 00:25:46.635 12:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.635 12:21:47 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:46.635 12:21:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.635 12:21:47 -- common/autotest_common.sh@10 -- # set +x 00:25:46.635 [2024-04-26 12:21:47.688391] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.635 12:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.635 12:21:47 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:46.635 12:21:47 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:46.635 12:21:47 -- nvmf/common.sh@521 -- # config=() 00:25:46.635 12:21:47 -- nvmf/common.sh@521 -- # local subsystem config 00:25:46.635 12:21:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:46.635 12:21:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:46.635 { 00:25:46.635 "params": { 00:25:46.635 "name": "Nvme$subsystem", 00:25:46.635 "trtype": "$TEST_TRANSPORT", 00:25:46.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.635 "adrfam": "ipv4", 00:25:46.635 "trsvcid": "$NVMF_PORT", 00:25:46.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.635 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.635 "hdgst": ${hdgst:-false}, 00:25:46.635 "ddgst": ${ddgst:-false} 00:25:46.635 }, 00:25:46.635 "method": "bdev_nvme_attach_controller" 00:25:46.635 } 00:25:46.635 EOF 00:25:46.635 )") 00:25:46.635 12:21:47 -- nvmf/common.sh@543 -- # cat 00:25:46.635 12:21:47 -- nvmf/common.sh@545 -- # jq . 00:25:46.635 12:21:47 -- nvmf/common.sh@546 -- # IFS=, 00:25:46.635 12:21:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:46.635 "params": { 00:25:46.635 "name": "Nvme1", 00:25:46.635 "trtype": "tcp", 00:25:46.635 "traddr": "10.0.0.2", 00:25:46.635 "adrfam": "ipv4", 00:25:46.635 "trsvcid": "4420", 00:25:46.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:46.635 "hdgst": false, 00:25:46.635 "ddgst": false 00:25:46.635 }, 00:25:46.635 "method": "bdev_nvme_attach_controller" 00:25:46.635 }' 00:25:46.635 [2024-04-26 12:21:47.739977] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:46.635 [2024-04-26 12:21:47.740029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3559833 ] 00:25:46.635 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.635 [2024-04-26 12:21:47.799878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.896 [2024-04-26 12:21:47.863089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.157 Running I/O for 1 seconds... 00:25:48.098 00:25:48.098 Latency(us) 00:25:48.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.098 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:48.098 Verification LBA range: start 0x0 length 0x4000 00:25:48.098 Nvme1n1 : 1.01 8829.06 34.49 0.00 0.00 14437.70 3099.31 15837.87 00:25:48.098 =================================================================================================================== 00:25:48.098 Total : 8829.06 34.49 0.00 0.00 14437.70 3099.31 15837.87 00:25:48.098 12:21:49 -- host/bdevperf.sh@30 -- # bdevperfpid=3560150 00:25:48.098 12:21:49 -- host/bdevperf.sh@32 -- # sleep 3 00:25:48.098 12:21:49 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:48.098 12:21:49 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:48.098 12:21:49 -- nvmf/common.sh@521 -- # config=() 00:25:48.098 12:21:49 -- nvmf/common.sh@521 -- # local subsystem config 00:25:48.098 12:21:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:48.098 12:21:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:48.098 { 00:25:48.098 "params": { 00:25:48.098 "name": "Nvme$subsystem", 00:25:48.098 "trtype": "$TEST_TRANSPORT", 00:25:48.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.098 "adrfam": "ipv4", 00:25:48.098 "trsvcid": "$NVMF_PORT", 00:25:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.098 "hdgst": ${hdgst:-false}, 00:25:48.098 "ddgst": ${ddgst:-false} 00:25:48.098 }, 00:25:48.098 "method": "bdev_nvme_attach_controller" 00:25:48.098 } 00:25:48.098 EOF 00:25:48.098 )") 00:25:48.098 12:21:49 -- nvmf/common.sh@543 -- # cat 00:25:48.098 12:21:49 -- nvmf/common.sh@545 -- # jq . 
00:25:48.098 12:21:49 -- nvmf/common.sh@546 -- # IFS=, 00:25:48.098 12:21:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:48.098 "params": { 00:25:48.098 "name": "Nvme1", 00:25:48.098 "trtype": "tcp", 00:25:48.098 "traddr": "10.0.0.2", 00:25:48.098 "adrfam": "ipv4", 00:25:48.098 "trsvcid": "4420", 00:25:48.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:48.098 "hdgst": false, 00:25:48.098 "ddgst": false 00:25:48.098 }, 00:25:48.098 "method": "bdev_nvme_attach_controller" 00:25:48.098 }' 00:25:48.358 [2024-04-26 12:21:49.332168] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:48.358 [2024-04-26 12:21:49.332270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3560150 ] 00:25:48.358 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.358 [2024-04-26 12:21:49.393880] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.358 [2024-04-26 12:21:49.455990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.618 Running I/O for 15 seconds... 00:25:51.163 12:21:52 -- host/bdevperf.sh@33 -- # kill -9 3559774 00:25:51.163 12:21:52 -- host/bdevperf.sh@35 -- # sleep 3 00:25:51.163 [2024-04-26 12:21:52.290689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.290732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.290754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.290764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.290775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.290782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.290792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.290802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.290812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.290821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.290841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.290851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.290862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.290870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.290880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.290888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.290899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.290907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.290919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.290928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.290940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.290949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.290960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.290969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.290980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.290991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.291003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.291013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.291025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.291034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.291049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.291056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.291067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:115 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.291074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.291083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.291095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.291104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.291112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.291123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.291131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.291141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.291149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.291159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.163 [2024-04-26 12:21:52.291168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.163 [2024-04-26 12:21:52.291178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93816 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 
12:21:52.291418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.164 [2024-04-26 12:21:52.291666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.164 [2024-04-26 12:21:52.291683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.164 [2024-04-26 12:21:52.291699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.291983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.291990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.292001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.292008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.292018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.292025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.292035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.292043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.292053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.292060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.292069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.292077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.292086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.292093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.292102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.292110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.292119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.292126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.164 [2024-04-26 12:21:52.292135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.164 [2024-04-26 12:21:52.292142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.165 [2024-04-26 12:21:52.292159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.165 [2024-04-26 12:21:52.292175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292184] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.165 [2024-04-26 12:21:52.292191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.165 [2024-04-26 12:21:52.292208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.165 [2024-04-26 12:21:52.292224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.165 [2024-04-26 12:21:52.292241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.165 [2024-04-26 12:21:52.292258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.165 [2024-04-26 12:21:52.292274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.165 [2024-04-26 12:21:52.292290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292349] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:93400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 
12:21:52.292862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:93552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.165 [2024-04-26 12:21:52.292990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.165 [2024-04-26 12:21:52.292997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.166 [2024-04-26 12:21:52.293007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.166 [2024-04-26 12:21:52.293014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.166 [2024-04-26 12:21:52.293023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21137c0 is same with the state(5) to be set 00:25:51.166 [2024-04-26 12:21:52.293032] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.166 [2024-04-26 12:21:52.293038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.166 [2024-04-26 12:21:52.293045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93600 len:8 PRP1 0x0 PRP2 0x0 00:25:51.166 [2024-04-26 12:21:52.293052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.166 [2024-04-26 12:21:52.293090] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21137c0 was disconnected and freed. reset controller. 00:25:51.166 [2024-04-26 12:21:52.296611] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.166 [2024-04-26 12:21:52.296657] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.166 [2024-04-26 12:21:52.297459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.166 [2024-04-26 12:21:52.297728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.166 [2024-04-26 12:21:52.297742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.166 [2024-04-26 12:21:52.297750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.166 [2024-04-26 12:21:52.297974] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.166 [2024-04-26 12:21:52.298193] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.166 [2024-04-26 12:21:52.298202] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.166 [2024-04-26 12:21:52.298210] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.166 [2024-04-26 12:21:52.301735] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.166 [2024-04-26 12:21:52.310675] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.166 [2024-04-26 12:21:52.311349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.166 [2024-04-26 12:21:52.311716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.166 [2024-04-26 12:21:52.311730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.166 [2024-04-26 12:21:52.311741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.166 [2024-04-26 12:21:52.311990] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.166 [2024-04-26 12:21:52.312214] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.166 [2024-04-26 12:21:52.312223] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.166 [2024-04-26 12:21:52.312231] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.166 [2024-04-26 12:21:52.315760] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.166 [2024-04-26 12:21:52.324489] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.166 [2024-04-26 12:21:52.325136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.166 [2024-04-26 12:21:52.325388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.166 [2024-04-26 12:21:52.325403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.166 [2024-04-26 12:21:52.325413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.166 [2024-04-26 12:21:52.325650] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.166 [2024-04-26 12:21:52.325880] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.166 [2024-04-26 12:21:52.325890] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.166 [2024-04-26 12:21:52.325897] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.166 [2024-04-26 12:21:52.329423] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.166 [2024-04-26 12:21:52.338351] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.166 [2024-04-26 12:21:52.338939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.166 [2024-04-26 12:21:52.339280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.166 [2024-04-26 12:21:52.339295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.166 [2024-04-26 12:21:52.339309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.166 [2024-04-26 12:21:52.339546] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.166 [2024-04-26 12:21:52.339767] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.166 [2024-04-26 12:21:52.339777] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.166 [2024-04-26 12:21:52.339784] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.166 [2024-04-26 12:21:52.343328] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.166 [2024-04-26 12:21:52.352269] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.166 [2024-04-26 12:21:52.352938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.166 [2024-04-26 12:21:52.353321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.166 [2024-04-26 12:21:52.353334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.166 [2024-04-26 12:21:52.353344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.166 [2024-04-26 12:21:52.353580] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.166 [2024-04-26 12:21:52.353802] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.166 [2024-04-26 12:21:52.353812] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.166 [2024-04-26 12:21:52.353819] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.166 [2024-04-26 12:21:52.357355] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.166 [2024-04-26 12:21:52.366089] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.166 [2024-04-26 12:21:52.366743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.166 [2024-04-26 12:21:52.366978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.166 [2024-04-26 12:21:52.366993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.166 [2024-04-26 12:21:52.367004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.166 [2024-04-26 12:21:52.367241] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.166 [2024-04-26 12:21:52.367463] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.166 [2024-04-26 12:21:52.367472] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.166 [2024-04-26 12:21:52.367480] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.166 [2024-04-26 12:21:52.371008] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.166 [2024-04-26 12:21:52.379940] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.428 [2024-04-26 12:21:52.380589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.428 [2024-04-26 12:21:52.380958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.428 [2024-04-26 12:21:52.380974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.428 [2024-04-26 12:21:52.380983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.428 [2024-04-26 12:21:52.381224] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.428 [2024-04-26 12:21:52.381446] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.428 [2024-04-26 12:21:52.381455] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.428 [2024-04-26 12:21:52.381463] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.428 [2024-04-26 12:21:52.384990] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.428 [2024-04-26 12:21:52.393710] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.428 [2024-04-26 12:21:52.394354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.428 [2024-04-26 12:21:52.394741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.428 [2024-04-26 12:21:52.394755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.428 [2024-04-26 12:21:52.394764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.428 [2024-04-26 12:21:52.395010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.428 [2024-04-26 12:21:52.395232] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.428 [2024-04-26 12:21:52.395241] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.428 [2024-04-26 12:21:52.395249] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.428 [2024-04-26 12:21:52.398771] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.428 [2024-04-26 12:21:52.407500] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.428 [2024-04-26 12:21:52.408154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.428 [2024-04-26 12:21:52.408488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.428 [2024-04-26 12:21:52.408502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.428 [2024-04-26 12:21:52.408512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.428 [2024-04-26 12:21:52.408748] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.429 [2024-04-26 12:21:52.408979] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.429 [2024-04-26 12:21:52.408990] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.429 [2024-04-26 12:21:52.408997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.429 [2024-04-26 12:21:52.412522] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.429 [2024-04-26 12:21:52.421449] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.429 [2024-04-26 12:21:52.422136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.422472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.422486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.429 [2024-04-26 12:21:52.422496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.429 [2024-04-26 12:21:52.422733] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.429 [2024-04-26 12:21:52.422967] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.429 [2024-04-26 12:21:52.422977] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.429 [2024-04-26 12:21:52.422985] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.429 [2024-04-26 12:21:52.426514] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.429 [2024-04-26 12:21:52.435236] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.429 [2024-04-26 12:21:52.435930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.436314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.436328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.429 [2024-04-26 12:21:52.436337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.429 [2024-04-26 12:21:52.436574] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.429 [2024-04-26 12:21:52.436795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.429 [2024-04-26 12:21:52.436804] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.429 [2024-04-26 12:21:52.436811] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.429 [2024-04-26 12:21:52.440352] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.429 [2024-04-26 12:21:52.449120] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.429 [2024-04-26 12:21:52.449770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.450118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.450133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.429 [2024-04-26 12:21:52.450142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.429 [2024-04-26 12:21:52.450378] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.429 [2024-04-26 12:21:52.450600] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.429 [2024-04-26 12:21:52.450609] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.429 [2024-04-26 12:21:52.450617] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.429 [2024-04-26 12:21:52.454146] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.429 [2024-04-26 12:21:52.463087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.429 [2024-04-26 12:21:52.463769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.464112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.464127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.429 [2024-04-26 12:21:52.464137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.429 [2024-04-26 12:21:52.464374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.429 [2024-04-26 12:21:52.464596] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.429 [2024-04-26 12:21:52.464612] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.429 [2024-04-26 12:21:52.464619] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.429 [2024-04-26 12:21:52.468151] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.429 [2024-04-26 12:21:52.476878] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.429 [2024-04-26 12:21:52.477547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.477934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.477949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.429 [2024-04-26 12:21:52.477959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.429 [2024-04-26 12:21:52.478195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.429 [2024-04-26 12:21:52.478417] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.429 [2024-04-26 12:21:52.478426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.429 [2024-04-26 12:21:52.478434] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.429 [2024-04-26 12:21:52.481964] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.429 [2024-04-26 12:21:52.490687] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.429 [2024-04-26 12:21:52.491356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.491698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.491712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.429 [2024-04-26 12:21:52.491722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.429 [2024-04-26 12:21:52.491966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.429 [2024-04-26 12:21:52.492189] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.429 [2024-04-26 12:21:52.492199] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.429 [2024-04-26 12:21:52.492207] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.429 [2024-04-26 12:21:52.495732] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.429 [2024-04-26 12:21:52.504460] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.429 [2024-04-26 12:21:52.504996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.505301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.505312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.429 [2024-04-26 12:21:52.505320] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.429 [2024-04-26 12:21:52.505539] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.429 [2024-04-26 12:21:52.505757] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.429 [2024-04-26 12:21:52.505766] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.429 [2024-04-26 12:21:52.505781] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.429 [2024-04-26 12:21:52.509306] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.429 [2024-04-26 12:21:52.518237] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.429 [2024-04-26 12:21:52.518918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.519261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.519274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.429 [2024-04-26 12:21:52.519284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.429 [2024-04-26 12:21:52.519521] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.429 [2024-04-26 12:21:52.519742] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.429 [2024-04-26 12:21:52.519752] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.429 [2024-04-26 12:21:52.519760] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.429 [2024-04-26 12:21:52.523291] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.429 [2024-04-26 12:21:52.532018] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.429 [2024-04-26 12:21:52.532662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.533018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.429 [2024-04-26 12:21:52.533033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.429 [2024-04-26 12:21:52.533043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.429 [2024-04-26 12:21:52.533279] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.429 [2024-04-26 12:21:52.533500] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.429 [2024-04-26 12:21:52.533510] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.429 [2024-04-26 12:21:52.533517] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.430 [2024-04-26 12:21:52.537048] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.430 [2024-04-26 12:21:52.545784] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.430 [2024-04-26 12:21:52.546460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.546856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.546871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.430 [2024-04-26 12:21:52.546881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.430 [2024-04-26 12:21:52.547118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.430 [2024-04-26 12:21:52.547339] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.430 [2024-04-26 12:21:52.547349] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.430 [2024-04-26 12:21:52.547356] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.430 [2024-04-26 12:21:52.550894] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.430 [2024-04-26 12:21:52.559639] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.430 [2024-04-26 12:21:52.560201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.560551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.560562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.430 [2024-04-26 12:21:52.560569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.430 [2024-04-26 12:21:52.560788] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.430 [2024-04-26 12:21:52.561010] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.430 [2024-04-26 12:21:52.561021] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.430 [2024-04-26 12:21:52.561029] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.430 [2024-04-26 12:21:52.564552] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.430 [2024-04-26 12:21:52.573480] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.430 [2024-04-26 12:21:52.574121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.574453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.574466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.430 [2024-04-26 12:21:52.574476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.430 [2024-04-26 12:21:52.574712] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.430 [2024-04-26 12:21:52.574942] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.430 [2024-04-26 12:21:52.574952] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.430 [2024-04-26 12:21:52.574960] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.430 [2024-04-26 12:21:52.578486] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.430 [2024-04-26 12:21:52.587416] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.430 [2024-04-26 12:21:52.587974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.588361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.588374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.430 [2024-04-26 12:21:52.588384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.430 [2024-04-26 12:21:52.588620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.430 [2024-04-26 12:21:52.588850] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.430 [2024-04-26 12:21:52.588860] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.430 [2024-04-26 12:21:52.588867] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.430 [2024-04-26 12:21:52.592392] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.430 [2024-04-26 12:21:52.601329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.430 [2024-04-26 12:21:52.601935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.602240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.602253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.430 [2024-04-26 12:21:52.602263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.430 [2024-04-26 12:21:52.602499] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.430 [2024-04-26 12:21:52.602720] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.430 [2024-04-26 12:21:52.602730] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.430 [2024-04-26 12:21:52.602737] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.430 [2024-04-26 12:21:52.606268] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.430 [2024-04-26 12:21:52.615205] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.430 [2024-04-26 12:21:52.615859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.616219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.616232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.430 [2024-04-26 12:21:52.616242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.430 [2024-04-26 12:21:52.616479] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.430 [2024-04-26 12:21:52.616700] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.430 [2024-04-26 12:21:52.616709] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.430 [2024-04-26 12:21:52.616717] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.430 [2024-04-26 12:21:52.620251] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.430 [2024-04-26 12:21:52.628974] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.430 [2024-04-26 12:21:52.629516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.629707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.629722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.430 [2024-04-26 12:21:52.629732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.430 [2024-04-26 12:21:52.629976] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.430 [2024-04-26 12:21:52.630200] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.430 [2024-04-26 12:21:52.630210] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.430 [2024-04-26 12:21:52.630217] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.430 [2024-04-26 12:21:52.633742] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.430 [2024-04-26 12:21:52.642896] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.430 [2024-04-26 12:21:52.643458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.643758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.430 [2024-04-26 12:21:52.643769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.430 [2024-04-26 12:21:52.643777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.430 [2024-04-26 12:21:52.644003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.430 [2024-04-26 12:21:52.644223] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.430 [2024-04-26 12:21:52.644232] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.430 [2024-04-26 12:21:52.644239] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.694 [2024-04-26 12:21:52.647759] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.694 [2024-04-26 12:21:52.656720] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.694 [2024-04-26 12:21:52.657343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.694 [2024-04-26 12:21:52.657721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.694 [2024-04-26 12:21:52.657734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.694 [2024-04-26 12:21:52.657744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.694 [2024-04-26 12:21:52.657989] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.694 [2024-04-26 12:21:52.658211] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.694 [2024-04-26 12:21:52.658221] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.694 [2024-04-26 12:21:52.658228] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.694 [2024-04-26 12:21:52.661758] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.694 [2024-04-26 12:21:52.670690] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.694 [2024-04-26 12:21:52.671340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.694 [2024-04-26 12:21:52.671684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.694 [2024-04-26 12:21:52.671698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.694 [2024-04-26 12:21:52.671708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.694 [2024-04-26 12:21:52.671952] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.694 [2024-04-26 12:21:52.672174] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.694 [2024-04-26 12:21:52.672183] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.694 [2024-04-26 12:21:52.672191] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.694 [2024-04-26 12:21:52.675717] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.694 [2024-04-26 12:21:52.684649] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.694 [2024-04-26 12:21:52.685319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.694 [2024-04-26 12:21:52.685702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.694 [2024-04-26 12:21:52.685720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.694 [2024-04-26 12:21:52.685730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.694 [2024-04-26 12:21:52.685975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.694 [2024-04-26 12:21:52.686197] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.694 [2024-04-26 12:21:52.686206] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.694 [2024-04-26 12:21:52.686214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.694 [2024-04-26 12:21:52.689739] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.695 [2024-04-26 12:21:52.698468] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.695 [2024-04-26 12:21:52.699149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.699485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.699498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.695 [2024-04-26 12:21:52.699508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.695 [2024-04-26 12:21:52.699745] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.695 [2024-04-26 12:21:52.699975] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.695 [2024-04-26 12:21:52.699985] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.695 [2024-04-26 12:21:52.699992] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.695 [2024-04-26 12:21:52.703518] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.695 [2024-04-26 12:21:52.712248] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.695 [2024-04-26 12:21:52.712764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.713110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.713125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.695 [2024-04-26 12:21:52.713134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.695 [2024-04-26 12:21:52.713371] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.695 [2024-04-26 12:21:52.713592] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.695 [2024-04-26 12:21:52.713602] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.695 [2024-04-26 12:21:52.713610] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.695 [2024-04-26 12:21:52.717140] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.695 [2024-04-26 12:21:52.726072] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.695 [2024-04-26 12:21:52.726741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.727072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.727087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.695 [2024-04-26 12:21:52.727101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.695 [2024-04-26 12:21:52.727338] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.695 [2024-04-26 12:21:52.727560] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.695 [2024-04-26 12:21:52.727569] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.695 [2024-04-26 12:21:52.727576] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.695 [2024-04-26 12:21:52.731106] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.695 [2024-04-26 12:21:52.740038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.695 [2024-04-26 12:21:52.740705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.741032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.741047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.695 [2024-04-26 12:21:52.741057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.695 [2024-04-26 12:21:52.741293] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.695 [2024-04-26 12:21:52.741514] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.695 [2024-04-26 12:21:52.741524] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.695 [2024-04-26 12:21:52.741531] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.695 [2024-04-26 12:21:52.745070] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.695 [2024-04-26 12:21:52.754001] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.695 [2024-04-26 12:21:52.754662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.755005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.755022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.695 [2024-04-26 12:21:52.755031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.695 [2024-04-26 12:21:52.755268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.695 [2024-04-26 12:21:52.755490] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.695 [2024-04-26 12:21:52.755498] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.695 [2024-04-26 12:21:52.755506] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.695 [2024-04-26 12:21:52.759045] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.695 [2024-04-26 12:21:52.767778] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.695 [2024-04-26 12:21:52.768311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.768657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.768668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.695 [2024-04-26 12:21:52.768677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.695 [2024-04-26 12:21:52.768904] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.695 [2024-04-26 12:21:52.769124] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.695 [2024-04-26 12:21:52.769132] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.695 [2024-04-26 12:21:52.769140] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.695 [2024-04-26 12:21:52.772660] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.695 [2024-04-26 12:21:52.781594] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.695 [2024-04-26 12:21:52.782090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.782423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.782434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.695 [2024-04-26 12:21:52.782442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.695 [2024-04-26 12:21:52.782659] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.695 [2024-04-26 12:21:52.782883] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.695 [2024-04-26 12:21:52.782893] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.695 [2024-04-26 12:21:52.782900] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.695 [2024-04-26 12:21:52.786420] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.695 [2024-04-26 12:21:52.795358] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.695 [2024-04-26 12:21:52.795877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.796252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.796263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.695 [2024-04-26 12:21:52.796271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.695 [2024-04-26 12:21:52.796489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.695 [2024-04-26 12:21:52.796706] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.695 [2024-04-26 12:21:52.796715] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.695 [2024-04-26 12:21:52.796722] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.695 [2024-04-26 12:21:52.800255] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.695 [2024-04-26 12:21:52.809199] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.695 [2024-04-26 12:21:52.809741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.810068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.810080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.695 [2024-04-26 12:21:52.810087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.695 [2024-04-26 12:21:52.810305] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.695 [2024-04-26 12:21:52.810526] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.695 [2024-04-26 12:21:52.810537] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.695 [2024-04-26 12:21:52.810544] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.695 [2024-04-26 12:21:52.814072] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.695 [2024-04-26 12:21:52.823021] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.695 [2024-04-26 12:21:52.823584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.823909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.695 [2024-04-26 12:21:52.823920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.696 [2024-04-26 12:21:52.823928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.696 [2024-04-26 12:21:52.824147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.696 [2024-04-26 12:21:52.824364] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.696 [2024-04-26 12:21:52.824372] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.696 [2024-04-26 12:21:52.824380] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.696 [2024-04-26 12:21:52.827907] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.696 [2024-04-26 12:21:52.836848] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.696 [2024-04-26 12:21:52.837324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.696 [2024-04-26 12:21:52.837627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.696 [2024-04-26 12:21:52.837638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.696 [2024-04-26 12:21:52.837645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.696 [2024-04-26 12:21:52.837868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.696 [2024-04-26 12:21:52.838087] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.696 [2024-04-26 12:21:52.838095] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.696 [2024-04-26 12:21:52.838102] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.696 [2024-04-26 12:21:52.841658] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.696 [2024-04-26 12:21:52.850613] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.696 [2024-04-26 12:21:52.851152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.696 [2024-04-26 12:21:52.851472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.696 [2024-04-26 12:21:52.851483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.696 [2024-04-26 12:21:52.851491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.696 [2024-04-26 12:21:52.851709] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.696 [2024-04-26 12:21:52.851932] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.696 [2024-04-26 12:21:52.851945] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.696 [2024-04-26 12:21:52.851952] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.696 [2024-04-26 12:21:52.855495] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.696 [2024-04-26 12:21:52.864482] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.696 [2024-04-26 12:21:52.865190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.696 [2024-04-26 12:21:52.865516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.696 [2024-04-26 12:21:52.865529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.696 [2024-04-26 12:21:52.865538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.696 [2024-04-26 12:21:52.865775] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.696 [2024-04-26 12:21:52.866003] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.696 [2024-04-26 12:21:52.866014] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.696 [2024-04-26 12:21:52.866021] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.696 [2024-04-26 12:21:52.869549] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.696 [2024-04-26 12:21:52.878281] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.696 [2024-04-26 12:21:52.878945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.696 [2024-04-26 12:21:52.879286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.696 [2024-04-26 12:21:52.879300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.696 [2024-04-26 12:21:52.879309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.696 [2024-04-26 12:21:52.879546] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.696 [2024-04-26 12:21:52.879767] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.696 [2024-04-26 12:21:52.879776] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.696 [2024-04-26 12:21:52.879784] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.696 [2024-04-26 12:21:52.883317] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.696 [2024-04-26 12:21:52.892046] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.696 [2024-04-26 12:21:52.892583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.696 [2024-04-26 12:21:52.892918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.696 [2024-04-26 12:21:52.892930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.696 [2024-04-26 12:21:52.892938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.696 [2024-04-26 12:21:52.893156] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.696 [2024-04-26 12:21:52.893374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.696 [2024-04-26 12:21:52.893383] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.696 [2024-04-26 12:21:52.893395] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.696 [2024-04-26 12:21:52.896919] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.696 [2024-04-26 12:21:52.905851] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.696 [2024-04-26 12:21:52.906371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.696 [2024-04-26 12:21:52.906719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.696 [2024-04-26 12:21:52.906730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.696 [2024-04-26 12:21:52.906738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.696 [2024-04-26 12:21:52.907128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.696 [2024-04-26 12:21:52.907396] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.696 [2024-04-26 12:21:52.907407] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.696 [2024-04-26 12:21:52.907414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.696 [2024-04-26 12:21:52.910941] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.959 [2024-04-26 12:21:52.919663] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.959 [2024-04-26 12:21:52.920201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.959 [2024-04-26 12:21:52.920518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.959 [2024-04-26 12:21:52.920529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.959 [2024-04-26 12:21:52.920537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.959 [2024-04-26 12:21:52.920755] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.959 [2024-04-26 12:21:52.920979] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.959 [2024-04-26 12:21:52.920988] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.959 [2024-04-26 12:21:52.920995] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.959 [2024-04-26 12:21:52.924516] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.959 [2024-04-26 12:21:52.933451] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.959 [2024-04-26 12:21:52.933983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.959 [2024-04-26 12:21:52.934249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.959 [2024-04-26 12:21:52.934260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.959 [2024-04-26 12:21:52.934267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.959 [2024-04-26 12:21:52.934485] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.959 [2024-04-26 12:21:52.934704] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.959 [2024-04-26 12:21:52.934712] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.959 [2024-04-26 12:21:52.934719] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.959 [2024-04-26 12:21:52.938245] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.959 [2024-04-26 12:21:52.947392] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.959 [2024-04-26 12:21:52.947921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.959 [2024-04-26 12:21:52.948239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.959 [2024-04-26 12:21:52.948249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.959 [2024-04-26 12:21:52.948257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.959 [2024-04-26 12:21:52.948475] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.959 [2024-04-26 12:21:52.948692] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.959 [2024-04-26 12:21:52.948702] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.959 [2024-04-26 12:21:52.948709] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.959 [2024-04-26 12:21:52.952231] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.959 [2024-04-26 12:21:52.961169] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.959 [2024-04-26 12:21:52.961693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.959 [2024-04-26 12:21:52.962008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.959 [2024-04-26 12:21:52.962021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.959 [2024-04-26 12:21:52.962028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.959 [2024-04-26 12:21:52.962246] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.959 [2024-04-26 12:21:52.962464] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.959 [2024-04-26 12:21:52.962473] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.959 [2024-04-26 12:21:52.962480] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.959 [2024-04-26 12:21:52.966000] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.959 [2024-04-26 12:21:52.975134] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.959 [2024-04-26 12:21:52.975656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.959 [2024-04-26 12:21:52.975972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.959 [2024-04-26 12:21:52.975983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.959 [2024-04-26 12:21:52.975991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.959 [2024-04-26 12:21:52.976209] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.959 [2024-04-26 12:21:52.976427] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.959 [2024-04-26 12:21:52.976435] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.959 [2024-04-26 12:21:52.976443] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.959 [2024-04-26 12:21:52.979963] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.959 [2024-04-26 12:21:52.988897] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.959 [2024-04-26 12:21:52.989553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.959 [2024-04-26 12:21:52.989894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:52.989908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.960 [2024-04-26 12:21:52.989918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.960 [2024-04-26 12:21:52.990155] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.960 [2024-04-26 12:21:52.990376] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.960 [2024-04-26 12:21:52.990385] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.960 [2024-04-26 12:21:52.990393] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.960 [2024-04-26 12:21:52.993924] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.960 [2024-04-26 12:21:53.002859] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.960 [2024-04-26 12:21:53.003431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.003742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.003752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.960 [2024-04-26 12:21:53.003760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.960 [2024-04-26 12:21:53.003983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.960 [2024-04-26 12:21:53.004201] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.960 [2024-04-26 12:21:53.004219] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.960 [2024-04-26 12:21:53.004226] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.960 [2024-04-26 12:21:53.007747] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.960 [2024-04-26 12:21:53.016680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.960 [2024-04-26 12:21:53.017343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.017680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.017694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.960 [2024-04-26 12:21:53.017703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.960 [2024-04-26 12:21:53.017947] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.960 [2024-04-26 12:21:53.018169] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.960 [2024-04-26 12:21:53.018178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.960 [2024-04-26 12:21:53.018186] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.960 [2024-04-26 12:21:53.021711] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.960 [2024-04-26 12:21:53.030643] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.960 [2024-04-26 12:21:53.031103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.031442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.031453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.960 [2024-04-26 12:21:53.031461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.960 [2024-04-26 12:21:53.031679] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.960 [2024-04-26 12:21:53.031902] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.960 [2024-04-26 12:21:53.031912] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.960 [2024-04-26 12:21:53.031919] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.960 [2024-04-26 12:21:53.035438] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.960 [2024-04-26 12:21:53.044586] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.960 [2024-04-26 12:21:53.045226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.045608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.045622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.960 [2024-04-26 12:21:53.045632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.960 [2024-04-26 12:21:53.045874] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.960 [2024-04-26 12:21:53.046096] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.960 [2024-04-26 12:21:53.046106] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.960 [2024-04-26 12:21:53.046113] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.960 [2024-04-26 12:21:53.049641] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.960 [2024-04-26 12:21:53.058368] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.960 [2024-04-26 12:21:53.059048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.059383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.059397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.960 [2024-04-26 12:21:53.059406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.960 [2024-04-26 12:21:53.059643] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.960 [2024-04-26 12:21:53.059872] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.960 [2024-04-26 12:21:53.059882] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.960 [2024-04-26 12:21:53.059889] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.960 [2024-04-26 12:21:53.063415] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.960 [2024-04-26 12:21:53.072170] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.960 [2024-04-26 12:21:53.072853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.073166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.073185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.960 [2024-04-26 12:21:53.073196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.960 [2024-04-26 12:21:53.073433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.960 [2024-04-26 12:21:53.073655] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.960 [2024-04-26 12:21:53.073664] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.960 [2024-04-26 12:21:53.073673] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.960 [2024-04-26 12:21:53.077205] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.960 [2024-04-26 12:21:53.086140] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.960 [2024-04-26 12:21:53.086756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.087100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.087115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.960 [2024-04-26 12:21:53.087125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.960 [2024-04-26 12:21:53.087361] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.960 [2024-04-26 12:21:53.087583] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.960 [2024-04-26 12:21:53.087592] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.960 [2024-04-26 12:21:53.087600] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.960 [2024-04-26 12:21:53.091132] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.960 [2024-04-26 12:21:53.100069] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.960 [2024-04-26 12:21:53.100599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.100949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.100960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.960 [2024-04-26 12:21:53.100968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.960 [2024-04-26 12:21:53.101187] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.960 [2024-04-26 12:21:53.101405] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.960 [2024-04-26 12:21:53.101414] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.960 [2024-04-26 12:21:53.101421] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.960 [2024-04-26 12:21:53.104944] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.960 [2024-04-26 12:21:53.113877] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.960 [2024-04-26 12:21:53.114324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.114671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.960 [2024-04-26 12:21:53.114682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.960 [2024-04-26 12:21:53.114694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.961 [2024-04-26 12:21:53.114917] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.961 [2024-04-26 12:21:53.115137] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.961 [2024-04-26 12:21:53.115146] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.961 [2024-04-26 12:21:53.115153] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.961 [2024-04-26 12:21:53.118671] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.961 [2024-04-26 12:21:53.127806] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.961 [2024-04-26 12:21:53.128375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.961 [2024-04-26 12:21:53.128718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.961 [2024-04-26 12:21:53.128729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.961 [2024-04-26 12:21:53.128737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.961 [2024-04-26 12:21:53.128959] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.961 [2024-04-26 12:21:53.129179] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.961 [2024-04-26 12:21:53.129187] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.961 [2024-04-26 12:21:53.129194] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.961 [2024-04-26 12:21:53.132712] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.961 [2024-04-26 12:21:53.141640] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.961 [2024-04-26 12:21:53.142326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.961 [2024-04-26 12:21:53.142708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.961 [2024-04-26 12:21:53.142721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.961 [2024-04-26 12:21:53.142731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.961 [2024-04-26 12:21:53.142983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.961 [2024-04-26 12:21:53.143205] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.961 [2024-04-26 12:21:53.143215] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.961 [2024-04-26 12:21:53.143222] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.961 [2024-04-26 12:21:53.146749] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.961 [2024-04-26 12:21:53.155474] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.961 [2024-04-26 12:21:53.156131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.961 [2024-04-26 12:21:53.156515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.961 [2024-04-26 12:21:53.156529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.961 [2024-04-26 12:21:53.156538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.961 [2024-04-26 12:21:53.156782] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.961 [2024-04-26 12:21:53.157009] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.961 [2024-04-26 12:21:53.157020] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.961 [2024-04-26 12:21:53.157028] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.961 [2024-04-26 12:21:53.160559] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.961 [2024-04-26 12:21:53.169293] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.961 [2024-04-26 12:21:53.169823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.961 [2024-04-26 12:21:53.170120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.961 [2024-04-26 12:21:53.170132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:51.961 [2024-04-26 12:21:53.170140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:51.961 [2024-04-26 12:21:53.170359] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:51.961 [2024-04-26 12:21:53.170577] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.961 [2024-04-26 12:21:53.170586] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.961 [2024-04-26 12:21:53.170593] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.961 [2024-04-26 12:21:53.174117] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.239 [2024-04-26 12:21:53.183256] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.239 [2024-04-26 12:21:53.183904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.239 [2024-04-26 12:21:53.184916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.239 [2024-04-26 12:21:53.184942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.239 [2024-04-26 12:21:53.184952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.239 [2024-04-26 12:21:53.185190] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.239 [2024-04-26 12:21:53.185412] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.239 [2024-04-26 12:21:53.185421] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.239 [2024-04-26 12:21:53.185428] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.239 [2024-04-26 12:21:53.188965] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.239 [2024-04-26 12:21:53.197084] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.239 [2024-04-26 12:21:53.197641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.239 [2024-04-26 12:21:53.197994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.239 [2024-04-26 12:21:53.198009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.239 [2024-04-26 12:21:53.198019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.239 [2024-04-26 12:21:53.198256] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.239 [2024-04-26 12:21:53.198482] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.239 [2024-04-26 12:21:53.198491] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.239 [2024-04-26 12:21:53.198499] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.240 [2024-04-26 12:21:53.202026] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.240 [2024-04-26 12:21:53.210960] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.240 [2024-04-26 12:21:53.211534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.240 [2024-04-26 12:21:53.211872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.240 [2024-04-26 12:21:53.211884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.240 [2024-04-26 12:21:53.211892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.240 [2024-04-26 12:21:53.212111] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.240 [2024-04-26 12:21:53.212329] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.240 [2024-04-26 12:21:53.212338] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.240 [2024-04-26 12:21:53.212345] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.240 [2024-04-26 12:21:53.215869] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.240 [2024-04-26 12:21:53.224798] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.240 [2024-04-26 12:21:53.225451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.240 [2024-04-26 12:21:53.225834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.240 [2024-04-26 12:21:53.225854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.240 [2024-04-26 12:21:53.225864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.240 [2024-04-26 12:21:53.226101] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.240 [2024-04-26 12:21:53.226322] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.240 [2024-04-26 12:21:53.226331] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.241 [2024-04-26 12:21:53.226339] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.241 [2024-04-26 12:21:53.229868] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.241 [2024-04-26 12:21:53.238597] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.241 [2024-04-26 12:21:53.239146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.241 [2024-04-26 12:21:53.239478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.241 [2024-04-26 12:21:53.239489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.241 [2024-04-26 12:21:53.239497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.241 [2024-04-26 12:21:53.239715] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.241 [2024-04-26 12:21:53.239939] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.241 [2024-04-26 12:21:53.239953] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.241 [2024-04-26 12:21:53.239960] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.241 [2024-04-26 12:21:53.243481] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.241 [2024-04-26 12:21:53.252424] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.241 [2024-04-26 12:21:53.252983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.241 [2024-04-26 12:21:53.253356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.241 [2024-04-26 12:21:53.253370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.242 [2024-04-26 12:21:53.253380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.242 [2024-04-26 12:21:53.253617] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.242 [2024-04-26 12:21:53.253846] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.242 [2024-04-26 12:21:53.253856] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.242 [2024-04-26 12:21:53.253864] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.242 [2024-04-26 12:21:53.257388] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.242 [2024-04-26 12:21:53.266333] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.242 [2024-04-26 12:21:53.266883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.242 [2024-04-26 12:21:53.267352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.242 [2024-04-26 12:21:53.267368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.242 [2024-04-26 12:21:53.267377] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.242 [2024-04-26 12:21:53.267602] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.242 [2024-04-26 12:21:53.267822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.242 [2024-04-26 12:21:53.267831] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.242 [2024-04-26 12:21:53.267844] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.242 [2024-04-26 12:21:53.271369] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.242 [2024-04-26 12:21:53.280328] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.242 [2024-04-26 12:21:53.280871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.242 [2024-04-26 12:21:53.281166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.242 [2024-04-26 12:21:53.281177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.242 [2024-04-26 12:21:53.281185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.242 [2024-04-26 12:21:53.281403] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.242 [2024-04-26 12:21:53.281622] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.242 [2024-04-26 12:21:53.281631] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.242 [2024-04-26 12:21:53.281643] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.242 [2024-04-26 12:21:53.285170] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.243 [2024-04-26 12:21:53.294101] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.243 [2024-04-26 12:21:53.294671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.243 [2024-04-26 12:21:53.295017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.243 [2024-04-26 12:21:53.295029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.243 [2024-04-26 12:21:53.295037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.243 [2024-04-26 12:21:53.295255] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.243 [2024-04-26 12:21:53.295473] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.243 [2024-04-26 12:21:53.295482] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.243 [2024-04-26 12:21:53.295489] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.243 [2024-04-26 12:21:53.299013] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.243 [2024-04-26 12:21:53.307946] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.243 [2024-04-26 12:21:53.308605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.243 [2024-04-26 12:21:53.308973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.243 [2024-04-26 12:21:53.308987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.243 [2024-04-26 12:21:53.308997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.243 [2024-04-26 12:21:53.309234] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.243 [2024-04-26 12:21:53.309454] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.243 [2024-04-26 12:21:53.309464] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.243 [2024-04-26 12:21:53.309472] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.243 [2024-04-26 12:21:53.313003] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.243 [2024-04-26 12:21:53.321731] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.243 [2024-04-26 12:21:53.322423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.243 [2024-04-26 12:21:53.322760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.243 [2024-04-26 12:21:53.322773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.244 [2024-04-26 12:21:53.322783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.244 [2024-04-26 12:21:53.323027] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.244 [2024-04-26 12:21:53.323248] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.244 [2024-04-26 12:21:53.323258] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.244 [2024-04-26 12:21:53.323265] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.244 [2024-04-26 12:21:53.326794] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.244 [2024-04-26 12:21:53.335524] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.244 [2024-04-26 12:21:53.336098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.244 [2024-04-26 12:21:53.336471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.244 [2024-04-26 12:21:53.336482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.244 [2024-04-26 12:21:53.336490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.244 [2024-04-26 12:21:53.336708] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.244 [2024-04-26 12:21:53.336933] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.244 [2024-04-26 12:21:53.336943] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.244 [2024-04-26 12:21:53.336951] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.244 [2024-04-26 12:21:53.340475] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.244 [2024-04-26 12:21:53.349498] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.244 [2024-04-26 12:21:53.350157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.244 [2024-04-26 12:21:53.350495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.244 [2024-04-26 12:21:53.350508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.244 [2024-04-26 12:21:53.350518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.244 [2024-04-26 12:21:53.350754] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.244 [2024-04-26 12:21:53.350983] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.244 [2024-04-26 12:21:53.350994] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.244 [2024-04-26 12:21:53.351002] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.244 [2024-04-26 12:21:53.354530] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.244 [2024-04-26 12:21:53.363480] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.244 [2024-04-26 12:21:53.364136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.245 [2024-04-26 12:21:53.364481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.245 [2024-04-26 12:21:53.364495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.245 [2024-04-26 12:21:53.364504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.245 [2024-04-26 12:21:53.364741] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.245 [2024-04-26 12:21:53.364969] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.245 [2024-04-26 12:21:53.364979] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.245 [2024-04-26 12:21:53.364987] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.245 [2024-04-26 12:21:53.368511] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.245 [2024-04-26 12:21:53.377452] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.245 [2024-04-26 12:21:53.378133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.245 [2024-04-26 12:21:53.378472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.245 [2024-04-26 12:21:53.378486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.245 [2024-04-26 12:21:53.378496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.245 [2024-04-26 12:21:53.378732] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.245 [2024-04-26 12:21:53.378961] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.245 [2024-04-26 12:21:53.378971] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.245 [2024-04-26 12:21:53.378979] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.245 [2024-04-26 12:21:53.382503] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.245 [2024-04-26 12:21:53.391228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.245 [2024-04-26 12:21:53.391765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.245 [2024-04-26 12:21:53.392089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.245 [2024-04-26 12:21:53.392101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.246 [2024-04-26 12:21:53.392109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.246 [2024-04-26 12:21:53.392327] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.246 [2024-04-26 12:21:53.392545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.246 [2024-04-26 12:21:53.392554] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.246 [2024-04-26 12:21:53.392562] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.246 [2024-04-26 12:21:53.396084] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.246 [2024-04-26 12:21:53.405014] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.246 [2024-04-26 12:21:53.405698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.246 [2024-04-26 12:21:53.406045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.246 [2024-04-26 12:21:53.406060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.246 [2024-04-26 12:21:53.406069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.246 [2024-04-26 12:21:53.406306] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.246 [2024-04-26 12:21:53.406527] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.246 [2024-04-26 12:21:53.406536] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.246 [2024-04-26 12:21:53.406544] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.246 [2024-04-26 12:21:53.410073] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.246 [2024-04-26 12:21:53.418798] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.246 [2024-04-26 12:21:53.419335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.246 [2024-04-26 12:21:53.419715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.246 [2024-04-26 12:21:53.419726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.246 [2024-04-26 12:21:53.419734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.247 [2024-04-26 12:21:53.419958] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.247 [2024-04-26 12:21:53.420177] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.247 [2024-04-26 12:21:53.420186] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.247 [2024-04-26 12:21:53.420193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.247 [2024-04-26 12:21:53.423711] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.247 [2024-04-26 12:21:53.432640] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.247 [2024-04-26 12:21:53.433291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.247 [2024-04-26 12:21:53.433605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.247 [2024-04-26 12:21:53.433619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.247 [2024-04-26 12:21:53.433629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.247 [2024-04-26 12:21:53.433872] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.247 [2024-04-26 12:21:53.434094] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.247 [2024-04-26 12:21:53.434104] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.247 [2024-04-26 12:21:53.434112] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.247 [2024-04-26 12:21:53.437639] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.247 [2024-04-26 12:21:53.446586] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.247 [2024-04-26 12:21:53.447134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.247 [2024-04-26 12:21:53.447445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.247 [2024-04-26 12:21:53.447456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.247 [2024-04-26 12:21:53.447463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.247 [2024-04-26 12:21:53.447682] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.247 [2024-04-26 12:21:53.447904] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.247 [2024-04-26 12:21:53.447913] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.247 [2024-04-26 12:21:53.447921] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.247 [2024-04-26 12:21:53.451439] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.518 [2024-04-26 12:21:53.460380] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.518 [2024-04-26 12:21:53.460908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.461260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.461271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.518 [2024-04-26 12:21:53.461279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.518 [2024-04-26 12:21:53.461497] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.518 [2024-04-26 12:21:53.461715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.518 [2024-04-26 12:21:53.461725] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.518 [2024-04-26 12:21:53.461732] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.518 [2024-04-26 12:21:53.465257] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.518 [2024-04-26 12:21:53.474190] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.518 [2024-04-26 12:21:53.474865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.475254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.475268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.518 [2024-04-26 12:21:53.475277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.518 [2024-04-26 12:21:53.475513] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.518 [2024-04-26 12:21:53.475734] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.518 [2024-04-26 12:21:53.475744] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.518 [2024-04-26 12:21:53.475751] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.518 [2024-04-26 12:21:53.479282] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.518 [2024-04-26 12:21:53.488038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.518 [2024-04-26 12:21:53.488687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.489050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.489064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.518 [2024-04-26 12:21:53.489074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.518 [2024-04-26 12:21:53.489311] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.518 [2024-04-26 12:21:53.489532] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.518 [2024-04-26 12:21:53.489541] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.518 [2024-04-26 12:21:53.489549] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.518 [2024-04-26 12:21:53.493074] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.518 [2024-04-26 12:21:53.502004] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.518 [2024-04-26 12:21:53.502653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.503015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.503031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.518 [2024-04-26 12:21:53.503045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.518 [2024-04-26 12:21:53.503281] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.518 [2024-04-26 12:21:53.503503] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.518 [2024-04-26 12:21:53.503512] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.518 [2024-04-26 12:21:53.503519] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.518 [2024-04-26 12:21:53.507047] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.518 [2024-04-26 12:21:53.515767] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.518 [2024-04-26 12:21:53.516365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.516737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.516751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.518 [2024-04-26 12:21:53.516761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.518 [2024-04-26 12:21:53.517006] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.518 [2024-04-26 12:21:53.517228] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.518 [2024-04-26 12:21:53.517237] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.518 [2024-04-26 12:21:53.517244] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.518 [2024-04-26 12:21:53.520769] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.518 [2024-04-26 12:21:53.529700] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.518 [2024-04-26 12:21:53.530362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.530704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.530718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.518 [2024-04-26 12:21:53.530727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.518 [2024-04-26 12:21:53.530973] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.518 [2024-04-26 12:21:53.531196] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.518 [2024-04-26 12:21:53.531205] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.518 [2024-04-26 12:21:53.531212] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.518 [2024-04-26 12:21:53.534734] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.518 [2024-04-26 12:21:53.543708] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.518 [2024-04-26 12:21:53.544404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.544780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.544794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.518 [2024-04-26 12:21:53.544803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.518 [2024-04-26 12:21:53.545055] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.518 [2024-04-26 12:21:53.545277] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.518 [2024-04-26 12:21:53.545287] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.518 [2024-04-26 12:21:53.545294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.518 [2024-04-26 12:21:53.548818] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.518 [2024-04-26 12:21:53.557543] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.518 [2024-04-26 12:21:53.558231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.558595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.558609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.518 [2024-04-26 12:21:53.558619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.518 [2024-04-26 12:21:53.558864] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.518 [2024-04-26 12:21:53.559092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.518 [2024-04-26 12:21:53.559104] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.518 [2024-04-26 12:21:53.559112] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.518 [2024-04-26 12:21:53.562636] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.518 [2024-04-26 12:21:53.571364] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.518 [2024-04-26 12:21:53.571887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.572282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.518 [2024-04-26 12:21:53.572295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.518 [2024-04-26 12:21:53.572305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.518 [2024-04-26 12:21:53.572541] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.518 [2024-04-26 12:21:53.572762] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.518 [2024-04-26 12:21:53.572771] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.518 [2024-04-26 12:21:53.572779] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.518 [2024-04-26 12:21:53.576311] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.518 [2024-04-26 12:21:53.585253] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.518 [2024-04-26 12:21:53.585799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.586172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.586186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.519 [2024-04-26 12:21:53.586196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.519 [2024-04-26 12:21:53.586433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.519 [2024-04-26 12:21:53.586659] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.519 [2024-04-26 12:21:53.586668] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.519 [2024-04-26 12:21:53.586676] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.519 [2024-04-26 12:21:53.590204] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.519 [2024-04-26 12:21:53.599135] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.519 [2024-04-26 12:21:53.599808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.600249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.600264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.519 [2024-04-26 12:21:53.600274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.519 [2024-04-26 12:21:53.600510] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.519 [2024-04-26 12:21:53.600733] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.519 [2024-04-26 12:21:53.600743] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.519 [2024-04-26 12:21:53.600751] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.519 [2024-04-26 12:21:53.604280] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.519 [2024-04-26 12:21:53.613006] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.519 [2024-04-26 12:21:53.613648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.613994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.614010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.519 [2024-04-26 12:21:53.614019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.519 [2024-04-26 12:21:53.614256] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.519 [2024-04-26 12:21:53.614478] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.519 [2024-04-26 12:21:53.614487] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.519 [2024-04-26 12:21:53.614494] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.519 [2024-04-26 12:21:53.618024] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.519 [2024-04-26 12:21:53.626963] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.519 [2024-04-26 12:21:53.627603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.627826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.627848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.519 [2024-04-26 12:21:53.627859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.519 [2024-04-26 12:21:53.628096] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.519 [2024-04-26 12:21:53.628318] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.519 [2024-04-26 12:21:53.628331] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.519 [2024-04-26 12:21:53.628338] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.519 [2024-04-26 12:21:53.631874] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.519 [2024-04-26 12:21:53.640813] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.519 [2024-04-26 12:21:53.641482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.642016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.642055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.519 [2024-04-26 12:21:53.642065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.519 [2024-04-26 12:21:53.642303] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.519 [2024-04-26 12:21:53.642524] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.519 [2024-04-26 12:21:53.642534] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.519 [2024-04-26 12:21:53.642542] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.519 [2024-04-26 12:21:53.646089] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.519 [2024-04-26 12:21:53.654612] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.519 [2024-04-26 12:21:53.655221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.655565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.655579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.519 [2024-04-26 12:21:53.655589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.519 [2024-04-26 12:21:53.655825] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.519 [2024-04-26 12:21:53.656056] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.519 [2024-04-26 12:21:53.656067] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.519 [2024-04-26 12:21:53.656074] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.519 [2024-04-26 12:21:53.659606] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.519 [2024-04-26 12:21:53.668544] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.519 [2024-04-26 12:21:53.669196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.669537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.669551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.519 [2024-04-26 12:21:53.669560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.519 [2024-04-26 12:21:53.669797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.519 [2024-04-26 12:21:53.670027] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.519 [2024-04-26 12:21:53.670037] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.519 [2024-04-26 12:21:53.670049] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.519 [2024-04-26 12:21:53.673577] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.519 [2024-04-26 12:21:53.682504] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.519 [2024-04-26 12:21:53.683166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.683549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.683562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.519 [2024-04-26 12:21:53.683571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.519 [2024-04-26 12:21:53.683808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.519 [2024-04-26 12:21:53.684040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.519 [2024-04-26 12:21:53.684051] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.519 [2024-04-26 12:21:53.684059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.519 [2024-04-26 12:21:53.687589] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.519 [2024-04-26 12:21:53.696344] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.519 [2024-04-26 12:21:53.696902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.697286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.697300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.519 [2024-04-26 12:21:53.697309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.519 [2024-04-26 12:21:53.697545] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.519 [2024-04-26 12:21:53.697767] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.519 [2024-04-26 12:21:53.697777] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.519 [2024-04-26 12:21:53.697784] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.519 [2024-04-26 12:21:53.701317] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.519 [2024-04-26 12:21:53.710243] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.519 [2024-04-26 12:21:53.710825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.711089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.519 [2024-04-26 12:21:53.711104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.519 [2024-04-26 12:21:53.711114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.520 [2024-04-26 12:21:53.711350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.520 [2024-04-26 12:21:53.711572] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.520 [2024-04-26 12:21:53.711581] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.520 [2024-04-26 12:21:53.711588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.520 [2024-04-26 12:21:53.715126] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.520 [2024-04-26 12:21:53.724054] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.520 [2024-04-26 12:21:53.724724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.520 [2024-04-26 12:21:53.725074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.520 [2024-04-26 12:21:53.725088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.520 [2024-04-26 12:21:53.725098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.520 [2024-04-26 12:21:53.725334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.520 [2024-04-26 12:21:53.725556] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.520 [2024-04-26 12:21:53.725565] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.520 [2024-04-26 12:21:53.725572] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.520 [2024-04-26 12:21:53.729103] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.781 [2024-04-26 12:21:53.737841] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.781 [2024-04-26 12:21:53.738508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.781 [2024-04-26 12:21:53.738882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.781 [2024-04-26 12:21:53.738897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.781 [2024-04-26 12:21:53.738906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.781 [2024-04-26 12:21:53.739143] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.781 [2024-04-26 12:21:53.739364] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.781 [2024-04-26 12:21:53.739373] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.781 [2024-04-26 12:21:53.739381] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.781 [2024-04-26 12:21:53.742927] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.781 [2024-04-26 12:21:53.751654] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.781 [2024-04-26 12:21:53.752310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.781 [2024-04-26 12:21:53.752643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.781 [2024-04-26 12:21:53.752657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.781 [2024-04-26 12:21:53.752666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.781 [2024-04-26 12:21:53.752914] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.781 [2024-04-26 12:21:53.753136] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.782 [2024-04-26 12:21:53.753145] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.782 [2024-04-26 12:21:53.753153] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.782 [2024-04-26 12:21:53.756679] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.782 [2024-04-26 12:21:53.765622] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.782 [2024-04-26 12:21:53.766281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.766618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.766632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.782 [2024-04-26 12:21:53.766641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.782 [2024-04-26 12:21:53.766888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.782 [2024-04-26 12:21:53.767110] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.782 [2024-04-26 12:21:53.767119] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.782 [2024-04-26 12:21:53.767127] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.782 [2024-04-26 12:21:53.770651] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.782 [2024-04-26 12:21:53.779587] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.782 [2024-04-26 12:21:53.780262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.780596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.780609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.782 [2024-04-26 12:21:53.780618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.782 [2024-04-26 12:21:53.780865] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.782 [2024-04-26 12:21:53.781088] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.782 [2024-04-26 12:21:53.781097] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.782 [2024-04-26 12:21:53.781104] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.782 [2024-04-26 12:21:53.784633] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.782 [2024-04-26 12:21:53.793376] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.782 [2024-04-26 12:21:53.793971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.794281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.794295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.782 [2024-04-26 12:21:53.794305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.782 [2024-04-26 12:21:53.794541] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.782 [2024-04-26 12:21:53.794762] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.782 [2024-04-26 12:21:53.794772] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.782 [2024-04-26 12:21:53.794779] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.782 [2024-04-26 12:21:53.798314] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.782 [2024-04-26 12:21:53.807250] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.782 [2024-04-26 12:21:53.807827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.808136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.808148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.782 [2024-04-26 12:21:53.808156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.782 [2024-04-26 12:21:53.808375] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.782 [2024-04-26 12:21:53.808593] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.782 [2024-04-26 12:21:53.808601] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.782 [2024-04-26 12:21:53.808608] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.782 [2024-04-26 12:21:53.812131] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.782 [2024-04-26 12:21:53.821058] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.782 [2024-04-26 12:21:53.821627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.821936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.821949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.782 [2024-04-26 12:21:53.821957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.782 [2024-04-26 12:21:53.822175] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.782 [2024-04-26 12:21:53.822394] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.782 [2024-04-26 12:21:53.822402] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.782 [2024-04-26 12:21:53.822410] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.782 [2024-04-26 12:21:53.825932] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.782 [2024-04-26 12:21:53.834877] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.782 [2024-04-26 12:21:53.835538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.835877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.835892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.782 [2024-04-26 12:21:53.835901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.782 [2024-04-26 12:21:53.836138] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.782 [2024-04-26 12:21:53.836359] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.782 [2024-04-26 12:21:53.836368] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.782 [2024-04-26 12:21:53.836375] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.782 [2024-04-26 12:21:53.839911] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.782 [2024-04-26 12:21:53.848657] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.782 [2024-04-26 12:21:53.849216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.849560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.849571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.782 [2024-04-26 12:21:53.849579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.782 [2024-04-26 12:21:53.849797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.782 [2024-04-26 12:21:53.850021] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.782 [2024-04-26 12:21:53.850030] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.782 [2024-04-26 12:21:53.850037] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.782 [2024-04-26 12:21:53.853563] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.782 [2024-04-26 12:21:53.862510] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.782 [2024-04-26 12:21:53.863146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.863527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.863540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.782 [2024-04-26 12:21:53.863550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.782 [2024-04-26 12:21:53.863786] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.782 [2024-04-26 12:21:53.864015] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.782 [2024-04-26 12:21:53.864026] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.782 [2024-04-26 12:21:53.864033] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.782 [2024-04-26 12:21:53.867560] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.782 [2024-04-26 12:21:53.876284] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.782 [2024-04-26 12:21:53.876952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.877291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.782 [2024-04-26 12:21:53.877304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.782 [2024-04-26 12:21:53.877314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.782 [2024-04-26 12:21:53.877550] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.782 [2024-04-26 12:21:53.877772] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.782 [2024-04-26 12:21:53.877781] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.782 [2024-04-26 12:21:53.877788] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.783 [2024-04-26 12:21:53.881321] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.783 [2024-04-26 12:21:53.890253] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.783 [2024-04-26 12:21:53.890823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.891174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.891186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.783 [2024-04-26 12:21:53.891199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.783 [2024-04-26 12:21:53.891418] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.783 [2024-04-26 12:21:53.891636] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.783 [2024-04-26 12:21:53.891645] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.783 [2024-04-26 12:21:53.891652] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.783 [2024-04-26 12:21:53.895176] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.783 [2024-04-26 12:21:53.904134] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.783 [2024-04-26 12:21:53.904759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.905096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.905111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.783 [2024-04-26 12:21:53.905120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.783 [2024-04-26 12:21:53.905357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.783 [2024-04-26 12:21:53.905578] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.783 [2024-04-26 12:21:53.905587] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.783 [2024-04-26 12:21:53.905595] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.783 [2024-04-26 12:21:53.909311] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.783 [2024-04-26 12:21:53.918048] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.783 [2024-04-26 12:21:53.918720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.919067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.919083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.783 [2024-04-26 12:21:53.919093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.783 [2024-04-26 12:21:53.919329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.783 [2024-04-26 12:21:53.919551] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.783 [2024-04-26 12:21:53.919560] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.783 [2024-04-26 12:21:53.919567] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.783 [2024-04-26 12:21:53.923097] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.783 [2024-04-26 12:21:53.931822] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.783 [2024-04-26 12:21:53.932489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.932822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.932836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.783 [2024-04-26 12:21:53.932854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.783 [2024-04-26 12:21:53.933095] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.783 [2024-04-26 12:21:53.933316] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.783 [2024-04-26 12:21:53.933325] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.783 [2024-04-26 12:21:53.933333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.783 [2024-04-26 12:21:53.936861] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.783 [2024-04-26 12:21:53.945800] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.783 [2024-04-26 12:21:53.946473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.946856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.946871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.783 [2024-04-26 12:21:53.946880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.783 [2024-04-26 12:21:53.947117] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.783 [2024-04-26 12:21:53.947338] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.783 [2024-04-26 12:21:53.947347] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.783 [2024-04-26 12:21:53.947355] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.783 [2024-04-26 12:21:53.950886] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.783 [2024-04-26 12:21:53.959614] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.783 [2024-04-26 12:21:53.960278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.960658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.960672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.783 [2024-04-26 12:21:53.960681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.783 [2024-04-26 12:21:53.960926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.783 [2024-04-26 12:21:53.961149] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.783 [2024-04-26 12:21:53.961158] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.783 [2024-04-26 12:21:53.961165] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.783 [2024-04-26 12:21:53.964689] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.783 [2024-04-26 12:21:53.973411] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.783 [2024-04-26 12:21:53.974076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.974415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.974428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.783 [2024-04-26 12:21:53.974437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.783 [2024-04-26 12:21:53.974674] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.783 [2024-04-26 12:21:53.974906] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.783 [2024-04-26 12:21:53.974916] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.783 [2024-04-26 12:21:53.974924] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.783 [2024-04-26 12:21:53.978450] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.783 [2024-04-26 12:21:53.987172] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.783 [2024-04-26 12:21:53.987797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.988130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.783 [2024-04-26 12:21:53.988144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:52.783 [2024-04-26 12:21:53.988154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:52.783 [2024-04-26 12:21:53.988390] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:52.783 [2024-04-26 12:21:53.988612] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.783 [2024-04-26 12:21:53.988621] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.783 [2024-04-26 12:21:53.988628] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.783 [2024-04-26 12:21:53.992160] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.045 [2024-04-26 12:21:54.001092] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.045 [2024-04-26 12:21:54.001757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.045 [2024-04-26 12:21:54.002105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.045 [2024-04-26 12:21:54.002120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.045 [2024-04-26 12:21:54.002129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.045 [2024-04-26 12:21:54.002366] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.045 [2024-04-26 12:21:54.002587] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.045 [2024-04-26 12:21:54.002596] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.045 [2024-04-26 12:21:54.002604] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.045 [2024-04-26 12:21:54.006136] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.045 [2024-04-26 12:21:54.014861] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.045 [2024-04-26 12:21:54.015529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.045 [2024-04-26 12:21:54.015882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.045 [2024-04-26 12:21:54.015897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.045 [2024-04-26 12:21:54.015907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.045 [2024-04-26 12:21:54.016144] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.045 [2024-04-26 12:21:54.016369] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.045 [2024-04-26 12:21:54.016378] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.045 [2024-04-26 12:21:54.016386] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.045 [2024-04-26 12:21:54.019917] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.045 [2024-04-26 12:21:54.028639] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.045 [2024-04-26 12:21:54.029309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.045 [2024-04-26 12:21:54.029641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.045 [2024-04-26 12:21:54.029655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.045 [2024-04-26 12:21:54.029664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.045 [2024-04-26 12:21:54.029910] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.045 [2024-04-26 12:21:54.030132] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.045 [2024-04-26 12:21:54.030141] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.045 [2024-04-26 12:21:54.030148] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.045 [2024-04-26 12:21:54.033673] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.045 [2024-04-26 12:21:54.042604] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.045 [2024-04-26 12:21:54.043260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.045 [2024-04-26 12:21:54.043641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.045 [2024-04-26 12:21:54.043655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.045 [2024-04-26 12:21:54.043664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.046 [2024-04-26 12:21:54.043909] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.046 [2024-04-26 12:21:54.044131] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.046 [2024-04-26 12:21:54.044140] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.046 [2024-04-26 12:21:54.044147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.046 [2024-04-26 12:21:54.047673] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.046 [2024-04-26 12:21:54.056398] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.046 [2024-04-26 12:21:54.056960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.057314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.057327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.046 [2024-04-26 12:21:54.057337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.046 [2024-04-26 12:21:54.057573] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.046 [2024-04-26 12:21:54.057794] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.046 [2024-04-26 12:21:54.057803] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.046 [2024-04-26 12:21:54.057815] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.046 [2024-04-26 12:21:54.061356] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.046 [2024-04-26 12:21:54.070295] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.046 [2024-04-26 12:21:54.070971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.071306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.071319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.046 [2024-04-26 12:21:54.071329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.046 [2024-04-26 12:21:54.071565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.046 [2024-04-26 12:21:54.071786] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.046 [2024-04-26 12:21:54.071795] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.046 [2024-04-26 12:21:54.071803] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.046 [2024-04-26 12:21:54.075336] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.046 [2024-04-26 12:21:54.084060] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.046 [2024-04-26 12:21:54.084631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.084973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.084985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.046 [2024-04-26 12:21:54.084993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.046 [2024-04-26 12:21:54.085211] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.046 [2024-04-26 12:21:54.085429] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.046 [2024-04-26 12:21:54.085438] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.046 [2024-04-26 12:21:54.085445] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.046 [2024-04-26 12:21:54.088967] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.046 [2024-04-26 12:21:54.097894] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.046 [2024-04-26 12:21:54.098415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.098758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.098769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.046 [2024-04-26 12:21:54.098776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.046 [2024-04-26 12:21:54.098999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.046 [2024-04-26 12:21:54.099218] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.046 [2024-04-26 12:21:54.099226] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.046 [2024-04-26 12:21:54.099241] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.046 [2024-04-26 12:21:54.102761] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.046 [2024-04-26 12:21:54.111683] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.046 [2024-04-26 12:21:54.112334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.112714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.112728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.046 [2024-04-26 12:21:54.112737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.046 [2024-04-26 12:21:54.112982] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.046 [2024-04-26 12:21:54.113204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.046 [2024-04-26 12:21:54.113213] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.046 [2024-04-26 12:21:54.113220] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.046 [2024-04-26 12:21:54.116745] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.046 [2024-04-26 12:21:54.125467] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.046 [2024-04-26 12:21:54.126131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.126381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.126395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.046 [2024-04-26 12:21:54.126406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.046 [2024-04-26 12:21:54.126643] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.046 [2024-04-26 12:21:54.126873] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.046 [2024-04-26 12:21:54.126883] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.046 [2024-04-26 12:21:54.126890] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.046 [2024-04-26 12:21:54.130418] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.046 [2024-04-26 12:21:54.139347] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.046 [2024-04-26 12:21:54.139940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.140258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.140271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.046 [2024-04-26 12:21:54.140281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.046 [2024-04-26 12:21:54.140517] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.046 [2024-04-26 12:21:54.140738] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.046 [2024-04-26 12:21:54.140748] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.046 [2024-04-26 12:21:54.140755] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.046 [2024-04-26 12:21:54.144302] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.046 [2024-04-26 12:21:54.153243] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.046 [2024-04-26 12:21:54.153916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.154270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.154284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.046 [2024-04-26 12:21:54.154293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.046 [2024-04-26 12:21:54.154530] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.046 [2024-04-26 12:21:54.154751] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.046 [2024-04-26 12:21:54.154760] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.046 [2024-04-26 12:21:54.154768] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.046 [2024-04-26 12:21:54.158299] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.046 [2024-04-26 12:21:54.167032] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.046 [2024-04-26 12:21:54.167697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.168020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.046 [2024-04-26 12:21:54.168035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.046 [2024-04-26 12:21:54.168045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.046 [2024-04-26 12:21:54.168282] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.046 [2024-04-26 12:21:54.168502] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.046 [2024-04-26 12:21:54.168512] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.047 [2024-04-26 12:21:54.168520] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.047 [2024-04-26 12:21:54.172049] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.047 [2024-04-26 12:21:54.180980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.047 [2024-04-26 12:21:54.181642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.047 [2024-04-26 12:21:54.182018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.047 [2024-04-26 12:21:54.182033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.047 [2024-04-26 12:21:54.182042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.047 [2024-04-26 12:21:54.182279] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.047 [2024-04-26 12:21:54.182500] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.047 [2024-04-26 12:21:54.182509] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.047 [2024-04-26 12:21:54.182517] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.047 [2024-04-26 12:21:54.186047] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.047 [2024-04-26 12:21:54.194771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.047 [2024-04-26 12:21:54.195408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.047 [2024-04-26 12:21:54.195742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.047 [2024-04-26 12:21:54.195756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.047 [2024-04-26 12:21:54.195765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.047 [2024-04-26 12:21:54.196010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.047 [2024-04-26 12:21:54.196232] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.047 [2024-04-26 12:21:54.196241] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.047 [2024-04-26 12:21:54.196249] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.047 [2024-04-26 12:21:54.199774] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.047 [2024-04-26 12:21:54.208704] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.047 [2024-04-26 12:21:54.209379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.047 [2024-04-26 12:21:54.209762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.047 [2024-04-26 12:21:54.209776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.047 [2024-04-26 12:21:54.209786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.047 [2024-04-26 12:21:54.210030] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.047 [2024-04-26 12:21:54.210252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.047 [2024-04-26 12:21:54.210261] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.047 [2024-04-26 12:21:54.210269] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.047 [2024-04-26 12:21:54.213791] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.047 [2024-04-26 12:21:54.222516] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.047 [2024-04-26 12:21:54.223133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.047 [2024-04-26 12:21:54.223515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.047 [2024-04-26 12:21:54.223529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.047 [2024-04-26 12:21:54.223538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.047 [2024-04-26 12:21:54.223775] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.047 [2024-04-26 12:21:54.224005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.047 [2024-04-26 12:21:54.224014] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.047 [2024-04-26 12:21:54.224022] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.047 [2024-04-26 12:21:54.227547] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.047 [2024-04-26 12:21:54.236476] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.047 [2024-04-26 12:21:54.237188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.047 [2024-04-26 12:21:54.237527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.047 [2024-04-26 12:21:54.237541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.047 [2024-04-26 12:21:54.237550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.047 [2024-04-26 12:21:54.237787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.047 [2024-04-26 12:21:54.238016] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.047 [2024-04-26 12:21:54.238027] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.047 [2024-04-26 12:21:54.238034] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.047 [2024-04-26 12:21:54.241560] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.047 [2024-04-26 12:21:54.250299] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.047 [2024-04-26 12:21:54.250885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.047 [2024-04-26 12:21:54.251222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.047 [2024-04-26 12:21:54.251233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.047 [2024-04-26 12:21:54.251241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.047 [2024-04-26 12:21:54.251464] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.047 [2024-04-26 12:21:54.251683] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.047 [2024-04-26 12:21:54.251692] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.047 [2024-04-26 12:21:54.251700] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.047 [2024-04-26 12:21:54.255228] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.309 [2024-04-26 12:21:54.264162] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.309 [2024-04-26 12:21:54.264826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.309 [2024-04-26 12:21:54.265189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.309 [2024-04-26 12:21:54.265202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.309 [2024-04-26 12:21:54.265211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.309 [2024-04-26 12:21:54.265448] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.309 [2024-04-26 12:21:54.265670] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.309 [2024-04-26 12:21:54.265679] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.309 [2024-04-26 12:21:54.265686] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.309 [2024-04-26 12:21:54.269217] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.309 [2024-04-26 12:21:54.277949] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.309 [2024-04-26 12:21:54.278588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.309 [2024-04-26 12:21:54.278953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.309 [2024-04-26 12:21:54.278969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.309 [2024-04-26 12:21:54.278983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.309 [2024-04-26 12:21:54.279220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.309 [2024-04-26 12:21:54.279441] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.309 [2024-04-26 12:21:54.279451] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.309 [2024-04-26 12:21:54.279458] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.309 [2024-04-26 12:21:54.282988] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.309 [2024-04-26 12:21:54.291716] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.309 [2024-04-26 12:21:54.292398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.309 [2024-04-26 12:21:54.292733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.309 [2024-04-26 12:21:54.292747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.309 [2024-04-26 12:21:54.292756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.309 [2024-04-26 12:21:54.293000] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.309 [2024-04-26 12:21:54.293222] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.309 [2024-04-26 12:21:54.293232] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.309 [2024-04-26 12:21:54.293240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.309 [2024-04-26 12:21:54.296764] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.309 [2024-04-26 12:21:54.305492] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.309 [2024-04-26 12:21:54.306048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.309 [2024-04-26 12:21:54.306372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.309 [2024-04-26 12:21:54.306384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.310 [2024-04-26 12:21:54.306393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.310 [2024-04-26 12:21:54.306612] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.310 [2024-04-26 12:21:54.306832] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.310 [2024-04-26 12:21:54.306846] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.310 [2024-04-26 12:21:54.306853] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.310 [2024-04-26 12:21:54.310376] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.310 [2024-04-26 12:21:54.319302] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.310 [2024-04-26 12:21:54.319827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.320184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.320197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.310 [2024-04-26 12:21:54.320205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.310 [2024-04-26 12:21:54.320428] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.310 [2024-04-26 12:21:54.320646] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.310 [2024-04-26 12:21:54.320655] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.310 [2024-04-26 12:21:54.320662] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.310 [2024-04-26 12:21:54.324187] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.310 [2024-04-26 12:21:54.333112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.310 [2024-04-26 12:21:54.333631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.333943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.333954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.310 [2024-04-26 12:21:54.333962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.310 [2024-04-26 12:21:54.334180] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.310 [2024-04-26 12:21:54.334397] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.310 [2024-04-26 12:21:54.334406] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.310 [2024-04-26 12:21:54.334413] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.310 [2024-04-26 12:21:54.337935] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.310 [2024-04-26 12:21:54.347078] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.310 [2024-04-26 12:21:54.347697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.348040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.348056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.310 [2024-04-26 12:21:54.348065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.310 [2024-04-26 12:21:54.348302] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.310 [2024-04-26 12:21:54.348523] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.310 [2024-04-26 12:21:54.348532] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.310 [2024-04-26 12:21:54.348540] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.310 [2024-04-26 12:21:54.352072] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.310 [2024-04-26 12:21:54.361007] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.310 [2024-04-26 12:21:54.361678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.362060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.362076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.310 [2024-04-26 12:21:54.362086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.310 [2024-04-26 12:21:54.362326] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.310 [2024-04-26 12:21:54.362547] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.310 [2024-04-26 12:21:54.362556] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.310 [2024-04-26 12:21:54.362564] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.310 [2024-04-26 12:21:54.366096] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.310 [2024-04-26 12:21:54.374818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.310 [2024-04-26 12:21:54.375489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.375876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.375892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.310 [2024-04-26 12:21:54.375901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.310 [2024-04-26 12:21:54.376138] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.310 [2024-04-26 12:21:54.376359] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.310 [2024-04-26 12:21:54.376368] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.310 [2024-04-26 12:21:54.376375] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.310 [2024-04-26 12:21:54.379905] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.310 [2024-04-26 12:21:54.388627] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.310 [2024-04-26 12:21:54.389249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.389629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.389643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.310 [2024-04-26 12:21:54.389652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.310 [2024-04-26 12:21:54.389897] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.310 [2024-04-26 12:21:54.390119] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.310 [2024-04-26 12:21:54.390128] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.310 [2024-04-26 12:21:54.390136] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.310 [2024-04-26 12:21:54.393663] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.310 [2024-04-26 12:21:54.402663] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.310 [2024-04-26 12:21:54.403334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.403669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.403683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.310 [2024-04-26 12:21:54.403692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.310 [2024-04-26 12:21:54.403937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.310 [2024-04-26 12:21:54.404163] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.310 [2024-04-26 12:21:54.404173] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.310 [2024-04-26 12:21:54.404181] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.310 [2024-04-26 12:21:54.407707] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.310 [2024-04-26 12:21:54.416636] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.310 [2024-04-26 12:21:54.417202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.417542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.417553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.310 [2024-04-26 12:21:54.417561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.310 [2024-04-26 12:21:54.417780] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.310 [2024-04-26 12:21:54.418004] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.310 [2024-04-26 12:21:54.418013] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.310 [2024-04-26 12:21:54.418020] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.310 [2024-04-26 12:21:54.421539] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.310 [2024-04-26 12:21:54.430465] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.310 [2024-04-26 12:21:54.430971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.431356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.310 [2024-04-26 12:21:54.431369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.311 [2024-04-26 12:21:54.431378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.311 [2024-04-26 12:21:54.431615] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.311 [2024-04-26 12:21:54.431845] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.311 [2024-04-26 12:21:54.431855] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.311 [2024-04-26 12:21:54.431862] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.311 [2024-04-26 12:21:54.435388] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.311 [2024-04-26 12:21:54.444329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.311 [2024-04-26 12:21:54.445065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.311 [2024-04-26 12:21:54.445313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.311 [2024-04-26 12:21:54.445326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.311 [2024-04-26 12:21:54.445337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.311 [2024-04-26 12:21:54.445574] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.311 [2024-04-26 12:21:54.445795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.311 [2024-04-26 12:21:54.445808] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.311 [2024-04-26 12:21:54.445817] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.311 [2024-04-26 12:21:54.449352] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.311 [2024-04-26 12:21:54.458285] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.311 [2024-04-26 12:21:54.458939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.311 [2024-04-26 12:21:54.459282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.311 [2024-04-26 12:21:54.459296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.311 [2024-04-26 12:21:54.459305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.311 [2024-04-26 12:21:54.459546] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.311 [2024-04-26 12:21:54.459769] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.311 [2024-04-26 12:21:54.459779] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.311 [2024-04-26 12:21:54.459786] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.311 [2024-04-26 12:21:54.463319] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.311 [2024-04-26 12:21:54.472251] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.311 [2024-04-26 12:21:54.472915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.311 [2024-04-26 12:21:54.473315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.311 [2024-04-26 12:21:54.473329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.311 [2024-04-26 12:21:54.473338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.311 [2024-04-26 12:21:54.473575] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.311 [2024-04-26 12:21:54.473796] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.311 [2024-04-26 12:21:54.473805] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.311 [2024-04-26 12:21:54.473813] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.311 [2024-04-26 12:21:54.477347] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.311 [2024-04-26 12:21:54.486078] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.311 [2024-04-26 12:21:54.486743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.311 [2024-04-26 12:21:54.487093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.311 [2024-04-26 12:21:54.487109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.311 [2024-04-26 12:21:54.487118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.311 [2024-04-26 12:21:54.487355] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.311 [2024-04-26 12:21:54.487577] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.311 [2024-04-26 12:21:54.487587] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.311 [2024-04-26 12:21:54.487600] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.311 [2024-04-26 12:21:54.491130] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.311 [2024-04-26 12:21:54.499854] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.311 [2024-04-26 12:21:54.500389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.311 [2024-04-26 12:21:54.500714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.311 [2024-04-26 12:21:54.500725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.311 [2024-04-26 12:21:54.500733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.311 [2024-04-26 12:21:54.500957] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.311 [2024-04-26 12:21:54.501177] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.311 [2024-04-26 12:21:54.501186] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.311 [2024-04-26 12:21:54.501193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.311 [2024-04-26 12:21:54.504710] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.311 [2024-04-26 12:21:54.513640] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.311 [2024-04-26 12:21:54.514306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.311 [2024-04-26 12:21:54.514625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.311 [2024-04-26 12:21:54.514639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.311 [2024-04-26 12:21:54.514648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.311 [2024-04-26 12:21:54.514892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.311 [2024-04-26 12:21:54.515114] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.311 [2024-04-26 12:21:54.515123] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.311 [2024-04-26 12:21:54.515131] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.311 [2024-04-26 12:21:54.518657] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.573 [2024-04-26 12:21:54.527596] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.573 [2024-04-26 12:21:54.528836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.573 [2024-04-26 12:21:54.529110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.573 [2024-04-26 12:21:54.529124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.573 [2024-04-26 12:21:54.529134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.573 [2024-04-26 12:21:54.529372] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.573 [2024-04-26 12:21:54.529595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.573 [2024-04-26 12:21:54.529605] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.573 [2024-04-26 12:21:54.529613] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.573 [2024-04-26 12:21:54.533146] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.573 [2024-04-26 12:21:54.541462] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.573 [2024-04-26 12:21:54.542007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.573 [2024-04-26 12:21:54.542344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.542355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.574 [2024-04-26 12:21:54.542363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.574 [2024-04-26 12:21:54.542581] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.574 [2024-04-26 12:21:54.542799] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.574 [2024-04-26 12:21:54.542808] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.574 [2024-04-26 12:21:54.542815] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.574 [2024-04-26 12:21:54.546351] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.574 [2024-04-26 12:21:54.555281] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.574 [2024-04-26 12:21:54.555804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.556113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.556125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.574 [2024-04-26 12:21:54.556132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.574 [2024-04-26 12:21:54.556350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.574 [2024-04-26 12:21:54.556568] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.574 [2024-04-26 12:21:54.556578] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.574 [2024-04-26 12:21:54.556585] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.574 [2024-04-26 12:21:54.560113] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.574 [2024-04-26 12:21:54.569127] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.574 [2024-04-26 12:21:54.569692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.569999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.570012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.574 [2024-04-26 12:21:54.570019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.574 [2024-04-26 12:21:54.570237] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.574 [2024-04-26 12:21:54.570457] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.574 [2024-04-26 12:21:54.570465] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.574 [2024-04-26 12:21:54.570472] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.574 [2024-04-26 12:21:54.573996] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.574 [2024-04-26 12:21:54.582925] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.574 [2024-04-26 12:21:54.583590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.583846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.583862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.574 [2024-04-26 12:21:54.583873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.574 [2024-04-26 12:21:54.584110] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.574 [2024-04-26 12:21:54.584332] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.574 [2024-04-26 12:21:54.584341] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.574 [2024-04-26 12:21:54.584348] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.574 [2024-04-26 12:21:54.587879] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.574 [2024-04-26 12:21:54.596814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.574 [2024-04-26 12:21:54.597354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.597695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.597706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.574 [2024-04-26 12:21:54.597714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.574 [2024-04-26 12:21:54.597936] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.574 [2024-04-26 12:21:54.598156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.574 [2024-04-26 12:21:54.598165] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.574 [2024-04-26 12:21:54.598172] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.574 [2024-04-26 12:21:54.601693] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.574 [2024-04-26 12:21:54.610626] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.574 [2024-04-26 12:21:54.611171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.611479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.611489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.574 [2024-04-26 12:21:54.611497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.574 [2024-04-26 12:21:54.611715] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.574 [2024-04-26 12:21:54.611938] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.574 [2024-04-26 12:21:54.611947] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.574 [2024-04-26 12:21:54.611954] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.574 [2024-04-26 12:21:54.615475] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.574 [2024-04-26 12:21:54.624404] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.574 [2024-04-26 12:21:54.624945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.625701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.625724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.574 [2024-04-26 12:21:54.625732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.574 [2024-04-26 12:21:54.625963] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.574 [2024-04-26 12:21:54.626184] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.574 [2024-04-26 12:21:54.626193] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.574 [2024-04-26 12:21:54.626201] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.574 [2024-04-26 12:21:54.629724] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.574 [2024-04-26 12:21:54.638262] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.574 [2024-04-26 12:21:54.638661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.639041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.639053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.574 [2024-04-26 12:21:54.639061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.574 [2024-04-26 12:21:54.639279] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.574 [2024-04-26 12:21:54.639498] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.574 [2024-04-26 12:21:54.639507] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.574 [2024-04-26 12:21:54.639514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.574 [2024-04-26 12:21:54.643041] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.574 [2024-04-26 12:21:54.652191] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.574 [2024-04-26 12:21:54.652865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.653272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.653286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.574 [2024-04-26 12:21:54.653295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.574 [2024-04-26 12:21:54.653533] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.574 [2024-04-26 12:21:54.653754] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.574 [2024-04-26 12:21:54.653764] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.574 [2024-04-26 12:21:54.653772] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.574 [2024-04-26 12:21:54.657304] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.574 [2024-04-26 12:21:54.666041] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.574 [2024-04-26 12:21:54.666731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.574 [2024-04-26 12:21:54.666872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.666891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.575 [2024-04-26 12:21:54.666902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.575 [2024-04-26 12:21:54.667138] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.575 [2024-04-26 12:21:54.667361] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.575 [2024-04-26 12:21:54.667370] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.575 [2024-04-26 12:21:54.667378] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.575 [2024-04-26 12:21:54.670909] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.575 [2024-04-26 12:21:54.679843] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.575 [2024-04-26 12:21:54.680373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.680722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.680733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.575 [2024-04-26 12:21:54.680741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.575 [2024-04-26 12:21:54.680965] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.575 [2024-04-26 12:21:54.681183] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.575 [2024-04-26 12:21:54.681193] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.575 [2024-04-26 12:21:54.681200] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.575 [2024-04-26 12:21:54.684720] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.575 [2024-04-26 12:21:54.693652] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.575 [2024-04-26 12:21:54.694248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.694593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.694603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.575 [2024-04-26 12:21:54.694611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.575 [2024-04-26 12:21:54.694828] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.575 [2024-04-26 12:21:54.695051] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.575 [2024-04-26 12:21:54.695061] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.575 [2024-04-26 12:21:54.695068] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.575 [2024-04-26 12:21:54.698585] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.575 [2024-04-26 12:21:54.707516] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.575 [2024-04-26 12:21:54.708131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.708469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.708480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.575 [2024-04-26 12:21:54.708491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.575 [2024-04-26 12:21:54.708709] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.575 [2024-04-26 12:21:54.708932] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.575 [2024-04-26 12:21:54.708941] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.575 [2024-04-26 12:21:54.708948] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.575 [2024-04-26 12:21:54.712471] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.575 [2024-04-26 12:21:54.721399] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.575 [2024-04-26 12:21:54.722093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.722488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.722502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.575 [2024-04-26 12:21:54.722511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.575 [2024-04-26 12:21:54.722748] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.575 [2024-04-26 12:21:54.722974] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.575 [2024-04-26 12:21:54.722984] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.575 [2024-04-26 12:21:54.722992] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.575 [2024-04-26 12:21:54.726520] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.575 [2024-04-26 12:21:54.735246] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.575 [2024-04-26 12:21:54.735822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.736007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.736021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.575 [2024-04-26 12:21:54.736029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.575 [2024-04-26 12:21:54.736247] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.575 [2024-04-26 12:21:54.736467] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.575 [2024-04-26 12:21:54.736476] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.575 [2024-04-26 12:21:54.736483] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.575 [2024-04-26 12:21:54.740009] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.575 [2024-04-26 12:21:54.749161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.575 [2024-04-26 12:21:54.749698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.749963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.749975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.575 [2024-04-26 12:21:54.749983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.575 [2024-04-26 12:21:54.750206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.575 [2024-04-26 12:21:54.750425] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.575 [2024-04-26 12:21:54.750433] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.575 [2024-04-26 12:21:54.750440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.575 [2024-04-26 12:21:54.753961] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.575 [2024-04-26 12:21:54.763105] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.575 [2024-04-26 12:21:54.763637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.763821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.763832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.575 [2024-04-26 12:21:54.763845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.575 [2024-04-26 12:21:54.764063] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.575 [2024-04-26 12:21:54.764282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.575 [2024-04-26 12:21:54.764291] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.575 [2024-04-26 12:21:54.764298] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.575 [2024-04-26 12:21:54.767818] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.575 [2024-04-26 12:21:54.776959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.575 [2024-04-26 12:21:54.777616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.777961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.575 [2024-04-26 12:21:54.777977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.575 [2024-04-26 12:21:54.777986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.575 [2024-04-26 12:21:54.778223] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.575 [2024-04-26 12:21:54.778445] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.575 [2024-04-26 12:21:54.778454] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.575 [2024-04-26 12:21:54.778462] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.575 [2024-04-26 12:21:54.781991] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.575 [2024-04-26 12:21:54.790926] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.575 [2024-04-26 12:21:54.791492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.791847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.791860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.838 [2024-04-26 12:21:54.791868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.838 [2024-04-26 12:21:54.792087] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.838 [2024-04-26 12:21:54.792311] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.838 [2024-04-26 12:21:54.792320] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.838 [2024-04-26 12:21:54.792327] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.838 [2024-04-26 12:21:54.795851] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.838 [2024-04-26 12:21:54.804777] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.838 [2024-04-26 12:21:54.805366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.805570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.805583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.838 [2024-04-26 12:21:54.805592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.838 [2024-04-26 12:21:54.805829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.838 [2024-04-26 12:21:54.806059] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.838 [2024-04-26 12:21:54.806068] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.838 [2024-04-26 12:21:54.806076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.838 [2024-04-26 12:21:54.809601] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.838 [2024-04-26 12:21:54.818740] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.838 [2024-04-26 12:21:54.819275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.819618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.819629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.838 [2024-04-26 12:21:54.819637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.838 [2024-04-26 12:21:54.819860] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.838 [2024-04-26 12:21:54.820078] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.838 [2024-04-26 12:21:54.820087] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.838 [2024-04-26 12:21:54.820094] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.838 [2024-04-26 12:21:54.823615] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.838 [2024-04-26 12:21:54.832591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.838 [2024-04-26 12:21:54.833143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.833262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.833271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.838 [2024-04-26 12:21:54.833280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.838 [2024-04-26 12:21:54.833498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.838 [2024-04-26 12:21:54.833717] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.838 [2024-04-26 12:21:54.833730] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.838 [2024-04-26 12:21:54.833737] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.838 [2024-04-26 12:21:54.837261] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.838 [2024-04-26 12:21:54.846411] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.838 [2024-04-26 12:21:54.847159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.847561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.847574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.838 [2024-04-26 12:21:54.847583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.838 [2024-04-26 12:21:54.847820] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.838 [2024-04-26 12:21:54.848048] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.838 [2024-04-26 12:21:54.848058] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.838 [2024-04-26 12:21:54.848066] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.838 [2024-04-26 12:21:54.851593] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.838 [2024-04-26 12:21:54.860330] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.838 [2024-04-26 12:21:54.860902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.861153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.861168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.838 [2024-04-26 12:21:54.861178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.838 [2024-04-26 12:21:54.861415] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.838 [2024-04-26 12:21:54.861637] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.838 [2024-04-26 12:21:54.861645] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.838 [2024-04-26 12:21:54.861653] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.838 [2024-04-26 12:21:54.865183] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.838 [2024-04-26 12:21:54.874119] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.838 [2024-04-26 12:21:54.874658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.875043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.875059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.838 [2024-04-26 12:21:54.875068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.838 [2024-04-26 12:21:54.875304] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.838 [2024-04-26 12:21:54.875525] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.838 [2024-04-26 12:21:54.875534] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.838 [2024-04-26 12:21:54.875546] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.838 [2024-04-26 12:21:54.879078] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.838 [2024-04-26 12:21:54.888013] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.838 [2024-04-26 12:21:54.888580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.888919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.838 [2024-04-26 12:21:54.888930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.838 [2024-04-26 12:21:54.888938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.838 [2024-04-26 12:21:54.889157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.838 [2024-04-26 12:21:54.889375] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.838 [2024-04-26 12:21:54.889384] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.838 [2024-04-26 12:21:54.889391] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.838 [2024-04-26 12:21:54.892917] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.839 [2024-04-26 12:21:54.901850] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.839 [2024-04-26 12:21:54.902488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.903189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.903206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.839 [2024-04-26 12:21:54.903217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.839 [2024-04-26 12:21:54.903459] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.839 [2024-04-26 12:21:54.903681] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.839 [2024-04-26 12:21:54.903691] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.839 [2024-04-26 12:21:54.903698] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.839 [2024-04-26 12:21:54.907229] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.839 [2024-04-26 12:21:54.915736] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.839 [2024-04-26 12:21:54.916295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.916637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.916649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.839 [2024-04-26 12:21:54.916657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.839 [2024-04-26 12:21:54.916881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.839 [2024-04-26 12:21:54.917100] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.839 [2024-04-26 12:21:54.917109] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.839 [2024-04-26 12:21:54.917116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.839 [2024-04-26 12:21:54.920640] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.839 [2024-04-26 12:21:54.929573] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.839 [2024-04-26 12:21:54.930151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.930465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.930476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.839 [2024-04-26 12:21:54.930484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.839 [2024-04-26 12:21:54.930702] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.839 [2024-04-26 12:21:54.930923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.839 [2024-04-26 12:21:54.930932] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.839 [2024-04-26 12:21:54.930939] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.839 [2024-04-26 12:21:54.934458] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.839 [2024-04-26 12:21:54.943385] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.839 [2024-04-26 12:21:54.943944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.944298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.944309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.839 [2024-04-26 12:21:54.944316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.839 [2024-04-26 12:21:54.944535] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.839 [2024-04-26 12:21:54.944754] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.839 [2024-04-26 12:21:54.944762] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.839 [2024-04-26 12:21:54.944769] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.839 [2024-04-26 12:21:54.948304] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.839 [2024-04-26 12:21:54.957234] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.839 [2024-04-26 12:21:54.957774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.958129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.958141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.839 [2024-04-26 12:21:54.958149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.839 [2024-04-26 12:21:54.958366] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.839 [2024-04-26 12:21:54.958584] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.839 [2024-04-26 12:21:54.958592] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.839 [2024-04-26 12:21:54.958599] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.839 [2024-04-26 12:21:54.962132] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.839 [2024-04-26 12:21:54.971066] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.839 [2024-04-26 12:21:54.971593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.971917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.971929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.839 [2024-04-26 12:21:54.971937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.839 [2024-04-26 12:21:54.972155] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.839 [2024-04-26 12:21:54.972373] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.839 [2024-04-26 12:21:54.972383] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.839 [2024-04-26 12:21:54.972390] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.839 [2024-04-26 12:21:54.975914] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.839 [2024-04-26 12:21:54.984849] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.839 [2024-04-26 12:21:54.985413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.985753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.985764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.839 [2024-04-26 12:21:54.985771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.839 [2024-04-26 12:21:54.985994] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.839 [2024-04-26 12:21:54.986212] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.839 [2024-04-26 12:21:54.986222] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.839 [2024-04-26 12:21:54.986229] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.839 [2024-04-26 12:21:54.989746] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.839 [2024-04-26 12:21:54.998680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.839 [2024-04-26 12:21:54.999255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.999603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:54.999614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.839 [2024-04-26 12:21:54.999621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.839 [2024-04-26 12:21:54.999848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.839 [2024-04-26 12:21:55.000066] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.839 [2024-04-26 12:21:55.000075] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.839 [2024-04-26 12:21:55.000082] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.839 [2024-04-26 12:21:55.003602] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.839 [2024-04-26 12:21:55.012530] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.839 [2024-04-26 12:21:55.013022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:55.013351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:55.013361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.839 [2024-04-26 12:21:55.013369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.839 [2024-04-26 12:21:55.013587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.839 [2024-04-26 12:21:55.013805] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.839 [2024-04-26 12:21:55.013814] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.839 [2024-04-26 12:21:55.013821] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.839 [2024-04-26 12:21:55.017346] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.839 [2024-04-26 12:21:55.026485] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.839 [2024-04-26 12:21:55.027149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:55.027481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:55.027495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.839 [2024-04-26 12:21:55.027504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.839 [2024-04-26 12:21:55.027740] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.839 [2024-04-26 12:21:55.027967] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.839 [2024-04-26 12:21:55.027978] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.839 [2024-04-26 12:21:55.027986] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.839 [2024-04-26 12:21:55.031515] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.839 [2024-04-26 12:21:55.040455] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.839 [2024-04-26 12:21:55.041115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:55.041495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:55.041509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.839 [2024-04-26 12:21:55.041518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.839 [2024-04-26 12:21:55.041755] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.839 [2024-04-26 12:21:55.041984] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.839 [2024-04-26 12:21:55.041995] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.839 [2024-04-26 12:21:55.042003] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.839 [2024-04-26 12:21:55.045541] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.839 [2024-04-26 12:21:55.054272] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.839 [2024-04-26 12:21:55.054821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:55.055172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.839 [2024-04-26 12:21:55.055188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:53.839 [2024-04-26 12:21:55.055197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:53.839 [2024-04-26 12:21:55.055415] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:53.839 [2024-04-26 12:21:55.055633] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.839 [2024-04-26 12:21:55.055642] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.839 [2024-04-26 12:21:55.055649] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.101 [2024-04-26 12:21:55.059175] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.101 [2024-04-26 12:21:55.068119] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.101 [2024-04-26 12:21:55.068781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-04-26 12:21:55.069018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-04-26 12:21:55.069033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.101 [2024-04-26 12:21:55.069043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.101 [2024-04-26 12:21:55.069281] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.101 [2024-04-26 12:21:55.069503] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.101 [2024-04-26 12:21:55.069512] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.101 [2024-04-26 12:21:55.069519] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.101 [2024-04-26 12:21:55.073051] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.101 [2024-04-26 12:21:55.081987] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.101 [2024-04-26 12:21:55.082559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-04-26 12:21:55.083438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-04-26 12:21:55.083461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.101 [2024-04-26 12:21:55.083469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.101 [2024-04-26 12:21:55.083695] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.101 [2024-04-26 12:21:55.083921] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.101 [2024-04-26 12:21:55.083930] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.101 [2024-04-26 12:21:55.083938] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.101 [2024-04-26 12:21:55.087466] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.101 [2024-04-26 12:21:55.095803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.101 [2024-04-26 12:21:55.096342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-04-26 12:21:55.096693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-04-26 12:21:55.096704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.101 [2024-04-26 12:21:55.096716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.101 [2024-04-26 12:21:55.096939] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.101 [2024-04-26 12:21:55.097158] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.101 [2024-04-26 12:21:55.097166] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.101 [2024-04-26 12:21:55.097173] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.101 [2024-04-26 12:21:55.100692] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.101 [2024-04-26 12:21:55.109625] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.101 [2024-04-26 12:21:55.110306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-04-26 12:21:55.110641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-04-26 12:21:55.110655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.101 [2024-04-26 12:21:55.110664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.101 [2024-04-26 12:21:55.110908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.101 [2024-04-26 12:21:55.111130] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.101 [2024-04-26 12:21:55.111139] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.101 [2024-04-26 12:21:55.111147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.101 [2024-04-26 12:21:55.114670] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.101 [2024-04-26 12:21:55.123392] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.101 [2024-04-26 12:21:55.124073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-04-26 12:21:55.124431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-04-26 12:21:55.124445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.101 [2024-04-26 12:21:55.124454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.101 [2024-04-26 12:21:55.124690] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.102 [2024-04-26 12:21:55.124919] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.102 [2024-04-26 12:21:55.124929] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.102 [2024-04-26 12:21:55.124936] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.102 [2024-04-26 12:21:55.128463] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.102 [2024-04-26 12:21:55.137187] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.102 [2024-04-26 12:21:55.137874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.138246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.138260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.102 [2024-04-26 12:21:55.138270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.102 [2024-04-26 12:21:55.138510] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.102 [2024-04-26 12:21:55.138732] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.102 [2024-04-26 12:21:55.138740] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.102 [2024-04-26 12:21:55.138748] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.102 [2024-04-26 12:21:55.142283] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.102 [2024-04-26 12:21:55.151020] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.102 [2024-04-26 12:21:55.151602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.151942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.151957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.102 [2024-04-26 12:21:55.151965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.102 [2024-04-26 12:21:55.152183] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.102 [2024-04-26 12:21:55.152402] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.102 [2024-04-26 12:21:55.152411] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.102 [2024-04-26 12:21:55.152418] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.102 [2024-04-26 12:21:55.155943] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.102 [2024-04-26 12:21:55.164883] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.102 [2024-04-26 12:21:55.165550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.165889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.165904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.102 [2024-04-26 12:21:55.165914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.102 [2024-04-26 12:21:55.166151] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.102 [2024-04-26 12:21:55.166373] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.102 [2024-04-26 12:21:55.166382] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.102 [2024-04-26 12:21:55.166390] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.102 [2024-04-26 12:21:55.169919] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.102 [2024-04-26 12:21:55.178847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.102 [2024-04-26 12:21:55.179516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.179861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.179876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.102 [2024-04-26 12:21:55.179885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.102 [2024-04-26 12:21:55.180122] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.102 [2024-04-26 12:21:55.180348] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.102 [2024-04-26 12:21:55.180358] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.102 [2024-04-26 12:21:55.180366] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.102 [2024-04-26 12:21:55.183896] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.102 [2024-04-26 12:21:55.192620] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.102 [2024-04-26 12:21:55.193151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.193498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.193512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.102 [2024-04-26 12:21:55.193521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.102 [2024-04-26 12:21:55.193758] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.102 [2024-04-26 12:21:55.193988] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.102 [2024-04-26 12:21:55.193998] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.102 [2024-04-26 12:21:55.194006] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.102 [2024-04-26 12:21:55.197531] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.102 [2024-04-26 12:21:55.206458] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.102 [2024-04-26 12:21:55.207154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.207502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.207516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.102 [2024-04-26 12:21:55.207525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.102 [2024-04-26 12:21:55.207762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.102 [2024-04-26 12:21:55.207996] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.102 [2024-04-26 12:21:55.208006] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.102 [2024-04-26 12:21:55.208014] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.102 [2024-04-26 12:21:55.211539] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.102 [2024-04-26 12:21:55.220263] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.102 [2024-04-26 12:21:55.220935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.221286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.221299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.102 [2024-04-26 12:21:55.221309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.102 [2024-04-26 12:21:55.221545] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.102 [2024-04-26 12:21:55.221767] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.102 [2024-04-26 12:21:55.221780] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.102 [2024-04-26 12:21:55.221787] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.102 [2024-04-26 12:21:55.225321] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.102 [2024-04-26 12:21:55.234043] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.102 [2024-04-26 12:21:55.234644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.234990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.235005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.102 [2024-04-26 12:21:55.235014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.102 [2024-04-26 12:21:55.235251] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.102 [2024-04-26 12:21:55.235472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.102 [2024-04-26 12:21:55.235481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.102 [2024-04-26 12:21:55.235488] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.102 [2024-04-26 12:21:55.239021] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.102 [2024-04-26 12:21:55.247964] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.102 [2024-04-26 12:21:55.248626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.249009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-04-26 12:21:55.249025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.102 [2024-04-26 12:21:55.249035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.102 [2024-04-26 12:21:55.249271] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.102 [2024-04-26 12:21:55.249493] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.102 [2024-04-26 12:21:55.249502] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.102 [2024-04-26 12:21:55.249509] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.103 [2024-04-26 12:21:55.253039] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.103 [2024-04-26 12:21:55.261765] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.103 [2024-04-26 12:21:55.262433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.103 [2024-04-26 12:21:55.262813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.103 [2024-04-26 12:21:55.262827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.103 [2024-04-26 12:21:55.262845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.103 [2024-04-26 12:21:55.263082] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.103 [2024-04-26 12:21:55.263304] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.103 [2024-04-26 12:21:55.263313] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.103 [2024-04-26 12:21:55.263325] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.103 [2024-04-26 12:21:55.266855] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.103 [2024-04-26 12:21:55.275630] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.103 [2024-04-26 12:21:55.276281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.103 [2024-04-26 12:21:55.276518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.103 [2024-04-26 12:21:55.276533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.103 [2024-04-26 12:21:55.276543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.103 [2024-04-26 12:21:55.276780] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.103 [2024-04-26 12:21:55.277009] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.103 [2024-04-26 12:21:55.277018] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.103 [2024-04-26 12:21:55.277026] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.103 [2024-04-26 12:21:55.280554] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3559774 Killed "${NVMF_APP[@]}" "$@" 00:25:54.103 12:21:55 -- host/bdevperf.sh@36 -- # tgt_init 00:25:54.103 12:21:55 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:54.103 12:21:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:54.103 12:21:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:54.103 12:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:54.103 [2024-04-26 12:21:55.289488] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.103 [2024-04-26 12:21:55.290062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.103 [2024-04-26 12:21:55.290415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.103 [2024-04-26 12:21:55.290426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.103 [2024-04-26 12:21:55.290434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.103 [2024-04-26 12:21:55.290652] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.103 [2024-04-26 12:21:55.290874] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.103 [2024-04-26 12:21:55.290884] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.103 [2024-04-26 12:21:55.290892] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.103 [2024-04-26 12:21:55.294418] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.103 12:21:55 -- nvmf/common.sh@470 -- # nvmfpid=3561475 00:25:54.103 12:21:55 -- nvmf/common.sh@471 -- # waitforlisten 3561475 00:25:54.103 12:21:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:54.103 12:21:55 -- common/autotest_common.sh@817 -- # '[' -z 3561475 ']' 00:25:54.103 12:21:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.103 12:21:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:54.103 12:21:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.103 12:21:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:54.103 12:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:54.103 [2024-04-26 12:21:55.303355] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.103 [2024-04-26 12:21:55.303948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.103 [2024-04-26 12:21:55.304345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.103 [2024-04-26 12:21:55.304360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.103 [2024-04-26 12:21:55.304370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.103 [2024-04-26 12:21:55.304608] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.103 [2024-04-26 12:21:55.304830] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.103 [2024-04-26 12:21:55.304846] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.103 [2024-04-26 12:21:55.304855] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.103 [2024-04-26 12:21:55.308382] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
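The trace above shows nvmfappstart launching a fresh nvmf_tgt (pid 3561475) inside the cvl_0_0_ns_spdk network namespace with "-i 0 -e 0xFFFF -m 0xE" and then waiting for its RPC socket at /var/tmp/spdk.sock. A minimal stand-alone sketch of that start-and-wait step, using only the paths and flags visible in the log (this is not the autotest helper itself):

# Start the target in the test namespace and poll until its RPC socket exists.
NETNS=cvl_0_0_ns_spdk
SOCK=/var/tmp/spdk.sock
sudo ip netns exec "$NETNS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
for _ in $(seq 1 100); do
    [ -S "$SOCK" ] && break      # socket present -> target is listening for RPCs
    sleep 0.1
done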
00:25:54.103 [2024-04-26 12:21:55.317323] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.103 [2024-04-26 12:21:55.317794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.103 [2024-04-26 12:21:55.318137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.103 [2024-04-26 12:21:55.318148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.103 [2024-04-26 12:21:55.318156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.103 [2024-04-26 12:21:55.318375] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.103 [2024-04-26 12:21:55.318592] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.103 [2024-04-26 12:21:55.318600] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.103 [2024-04-26 12:21:55.318607] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.368 [2024-04-26 12:21:55.322135] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.368 [2024-04-26 12:21:55.331274] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.368 [2024-04-26 12:21:55.331876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-04-26 12:21:55.332230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-04-26 12:21:55.332243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.368 [2024-04-26 12:21:55.332252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.368 [2024-04-26 12:21:55.332489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.368 [2024-04-26 12:21:55.332710] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.368 [2024-04-26 12:21:55.332718] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.368 [2024-04-26 12:21:55.332726] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.368 [2024-04-26 12:21:55.336258] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.368 [2024-04-26 12:21:55.345201] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.368 [2024-04-26 12:21:55.345775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-04-26 12:21:55.345903] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:25:54.368 [2024-04-26 12:21:55.345954] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.368 [2024-04-26 12:21:55.346101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-04-26 12:21:55.346114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.368 [2024-04-26 12:21:55.346122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.368 [2024-04-26 12:21:55.346341] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.369 [2024-04-26 12:21:55.346558] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.369 [2024-04-26 12:21:55.346565] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.369 [2024-04-26 12:21:55.346573] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.369 [2024-04-26 12:21:55.350097] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.369 [2024-04-26 12:21:55.359027] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.369 [2024-04-26 12:21:55.359552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-04-26 12:21:55.359884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-04-26 12:21:55.359897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.369 [2024-04-26 12:21:55.359906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.369 [2024-04-26 12:21:55.360129] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.369 [2024-04-26 12:21:55.360349] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.369 [2024-04-26 12:21:55.360356] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.369 [2024-04-26 12:21:55.360363] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.369 [2024-04-26 12:21:55.363887] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.369 [2024-04-26 12:21:55.372812] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.369 [2024-04-26 12:21:55.373471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-04-26 12:21:55.373849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-04-26 12:21:55.373863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.369 [2024-04-26 12:21:55.373872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.369 [2024-04-26 12:21:55.374109] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.369 [2024-04-26 12:21:55.374330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.369 [2024-04-26 12:21:55.374338] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.369 [2024-04-26 12:21:55.374346] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.369 [2024-04-26 12:21:55.377882] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.369 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.369 [2024-04-26 12:21:55.386609] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.369 [2024-04-26 12:21:55.387262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-04-26 12:21:55.387627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-04-26 12:21:55.387639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.369 [2024-04-26 12:21:55.387649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.369 [2024-04-26 12:21:55.387892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.369 [2024-04-26 12:21:55.388113] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.369 [2024-04-26 12:21:55.388121] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.369 [2024-04-26 12:21:55.388128] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.369 [2024-04-26 12:21:55.391656] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
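The "EAL: No free 2048 kB hugepages reported on node 1" notice is informational here: initialization keeps going, so the hugepage pool on another NUMA node evidently covers the allocation. If it ever did become fatal, the per-node pools can be inspected directly (standard Linux sysfs paths; a hedged sketch):

# Show free 2 MB hugepages per NUMA node and the overall hugepage totals.
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
grep -i huge /proc/meminfo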
00:25:54.369 [2024-04-26 12:21:55.400379] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.369 [2024-04-26 12:21:55.400978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-04-26 12:21:55.401333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-04-26 12:21:55.401346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.369 [2024-04-26 12:21:55.401356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.369 [2024-04-26 12:21:55.401593] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.369 [2024-04-26 12:21:55.401813] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.369 [2024-04-26 12:21:55.401821] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.369 [2024-04-26 12:21:55.401829] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.369 [2024-04-26 12:21:55.405360] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.369 [2024-04-26 12:21:55.414292] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.369 [2024-04-26 12:21:55.414933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-04-26 12:21:55.415302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-04-26 12:21:55.415315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.369 [2024-04-26 12:21:55.415324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.369 [2024-04-26 12:21:55.415561] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.369 [2024-04-26 12:21:55.415782] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.369 [2024-04-26 12:21:55.415790] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.369 [2024-04-26 12:21:55.415798] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.369 [2024-04-26 12:21:55.419336] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.369 [2024-04-26 12:21:55.428069] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.369 [2024-04-26 12:21:55.428627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-04-26 12:21:55.429009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-04-26 12:21:55.429024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.369 [2024-04-26 12:21:55.429034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.369 [2024-04-26 12:21:55.429271] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.369 [2024-04-26 12:21:55.429400] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:54.369 [2024-04-26 12:21:55.429492] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.369 [2024-04-26 12:21:55.429500] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.369 [2024-04-26 12:21:55.429508] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.369 [2024-04-26 12:21:55.433040] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.369 [2024-04-26 12:21:55.441978] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.369 [2024-04-26 12:21:55.442667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-04-26 12:21:55.442973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-04-26 12:21:55.442988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.370 [2024-04-26 12:21:55.442998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.370 [2024-04-26 12:21:55.443235] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.370 [2024-04-26 12:21:55.443457] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.370 [2024-04-26 12:21:55.443465] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.370 [2024-04-26 12:21:55.443473] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.370 [2024-04-26 12:21:55.447016] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
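The "Total cores available: 3" notice matches the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, so exactly cores 1, 2 and 3 are enabled (core 0 is left out), which is also why three reactors are reported a little further down. A one-liner to decode any such mask:

# Decode an SPDK/DPDK core mask into the cores it enables (0xE -> 1 2 3).
mask=0xE
for core in $(seq 0 31); do
    (( (mask >> core) & 1 )) && printf 'core %d\n' "$core"
done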
00:25:54.370 [2024-04-26 12:21:55.456057] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.370 [2024-04-26 12:21:55.456721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-04-26 12:21:55.457093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-04-26 12:21:55.457106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.370 [2024-04-26 12:21:55.457116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.370 [2024-04-26 12:21:55.457353] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.370 [2024-04-26 12:21:55.457574] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.370 [2024-04-26 12:21:55.457582] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.370 [2024-04-26 12:21:55.457589] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.370 [2024-04-26 12:21:55.461129] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.370 [2024-04-26 12:21:55.469859] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.370 [2024-04-26 12:21:55.470420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-04-26 12:21:55.470795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-04-26 12:21:55.470804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.370 [2024-04-26 12:21:55.470812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.370 [2024-04-26 12:21:55.471036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.370 [2024-04-26 12:21:55.471254] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.370 [2024-04-26 12:21:55.471262] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.370 [2024-04-26 12:21:55.471269] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.370 [2024-04-26 12:21:55.474789] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.370 [2024-04-26 12:21:55.481510] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.370 [2024-04-26 12:21:55.481533] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.370 [2024-04-26 12:21:55.481538] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.370 [2024-04-26 12:21:55.481543] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:54.370 [2024-04-26 12:21:55.481547] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:54.370 [2024-04-26 12:21:55.481718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.370 [2024-04-26 12:21:55.481842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.370 [2024-04-26 12:21:55.481852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:54.370 [2024-04-26 12:21:55.483716] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.370 [2024-04-26 12:21:55.484393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-04-26 12:21:55.484735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-04-26 12:21:55.484747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.370 [2024-04-26 12:21:55.484757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.370 [2024-04-26 12:21:55.485003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.370 [2024-04-26 12:21:55.485225] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.370 [2024-04-26 12:21:55.485233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.370 [2024-04-26 12:21:55.485240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.370 [2024-04-26 12:21:55.488768] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.370 [2024-04-26 12:21:55.497490] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.370 [2024-04-26 12:21:55.498155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-04-26 12:21:55.498390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-04-26 12:21:55.498403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.370 [2024-04-26 12:21:55.498413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.370 [2024-04-26 12:21:55.498656] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.370 [2024-04-26 12:21:55.498883] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.370 [2024-04-26 12:21:55.498892] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.370 [2024-04-26 12:21:55.498900] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.370 [2024-04-26 12:21:55.502427] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
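The app_setup_trace notices above give the recipe for pulling the tracepoint data enabled by -e 0xFFFF: run spdk_trace against the live app, or copy the shared-memory ring for offline analysis. A hedged sketch using only the commands the log itself names (the build/bin location of spdk_trace is an assumption, not shown in the log):

# Snapshot the running target's tracepoints, or keep the trace ring for later.
sudo ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
sudo cp /dev/shm/nvmf_trace.0 .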
00:25:54.370 [2024-04-26 12:21:55.511355] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.370 [2024-04-26 12:21:55.511971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-04-26 12:21:55.512322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-04-26 12:21:55.512335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.370 [2024-04-26 12:21:55.512344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.370 [2024-04-26 12:21:55.512582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.370 [2024-04-26 12:21:55.512802] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.370 [2024-04-26 12:21:55.512811] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.370 [2024-04-26 12:21:55.512819] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.370 [2024-04-26 12:21:55.516349] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.370 [2024-04-26 12:21:55.525282] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.370 [2024-04-26 12:21:55.525733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-04-26 12:21:55.525913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-04-26 12:21:55.525924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.371 [2024-04-26 12:21:55.525932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.371 [2024-04-26 12:21:55.526150] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.371 [2024-04-26 12:21:55.526368] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.371 [2024-04-26 12:21:55.526375] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.371 [2024-04-26 12:21:55.526383] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.371 [2024-04-26 12:21:55.529905] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.371 [2024-04-26 12:21:55.539240] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.371 [2024-04-26 12:21:55.539827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-04-26 12:21:55.540035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-04-26 12:21:55.540047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.371 [2024-04-26 12:21:55.540055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.371 [2024-04-26 12:21:55.540274] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.371 [2024-04-26 12:21:55.540497] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.371 [2024-04-26 12:21:55.540504] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.371 [2024-04-26 12:21:55.540511] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.371 [2024-04-26 12:21:55.544041] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.371 [2024-04-26 12:21:55.553188] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.371 [2024-04-26 12:21:55.553624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-04-26 12:21:55.553955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-04-26 12:21:55.553965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.371 [2024-04-26 12:21:55.553972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.371 [2024-04-26 12:21:55.554191] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.371 [2024-04-26 12:21:55.554407] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.371 [2024-04-26 12:21:55.554415] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.371 [2024-04-26 12:21:55.554422] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.371 [2024-04-26 12:21:55.557942] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.371 [2024-04-26 12:21:55.567085] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.371 [2024-04-26 12:21:55.567634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-04-26 12:21:55.567948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-04-26 12:21:55.567961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.371 [2024-04-26 12:21:55.567968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.371 [2024-04-26 12:21:55.568187] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.371 [2024-04-26 12:21:55.568404] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.371 [2024-04-26 12:21:55.568411] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.371 [2024-04-26 12:21:55.568418] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.371 [2024-04-26 12:21:55.571940] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.371 [2024-04-26 12:21:55.580875] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.371 [2024-04-26 12:21:55.581424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-04-26 12:21:55.581749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-04-26 12:21:55.581758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.371 [2024-04-26 12:21:55.581766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.371 [2024-04-26 12:21:55.581988] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.371 [2024-04-26 12:21:55.582207] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.371 [2024-04-26 12:21:55.582220] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.371 [2024-04-26 12:21:55.582227] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.682 [2024-04-26 12:21:55.585745] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.682 [2024-04-26 12:21:55.594673] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.682 [2024-04-26 12:21:55.595119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.682 [2024-04-26 12:21:55.595330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.682 [2024-04-26 12:21:55.595339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.682 [2024-04-26 12:21:55.595346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.683 [2024-04-26 12:21:55.595565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.683 [2024-04-26 12:21:55.595782] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.683 [2024-04-26 12:21:55.595790] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.683 [2024-04-26 12:21:55.595796] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.683 [2024-04-26 12:21:55.599324] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.683 [2024-04-26 12:21:55.608458] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.683 [2024-04-26 12:21:55.609002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.609229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.609239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.683 [2024-04-26 12:21:55.609247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.683 [2024-04-26 12:21:55.609465] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.683 [2024-04-26 12:21:55.609682] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.683 [2024-04-26 12:21:55.609689] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.683 [2024-04-26 12:21:55.609696] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.683 [2024-04-26 12:21:55.613218] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.683 [2024-04-26 12:21:55.622350] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.683 [2024-04-26 12:21:55.622956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.623311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.623324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.683 [2024-04-26 12:21:55.623333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.683 [2024-04-26 12:21:55.623570] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.683 [2024-04-26 12:21:55.623791] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.683 [2024-04-26 12:21:55.623799] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.683 [2024-04-26 12:21:55.623811] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.683 [2024-04-26 12:21:55.627348] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.683 [2024-04-26 12:21:55.636275] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.683 [2024-04-26 12:21:55.636875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.637147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.637158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.683 [2024-04-26 12:21:55.637166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.683 [2024-04-26 12:21:55.637384] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.683 [2024-04-26 12:21:55.637601] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.683 [2024-04-26 12:21:55.637608] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.683 [2024-04-26 12:21:55.637615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.683 [2024-04-26 12:21:55.641138] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.683 [2024-04-26 12:21:55.650071] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.683 [2024-04-26 12:21:55.650748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.651111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.651125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.683 [2024-04-26 12:21:55.651134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.683 [2024-04-26 12:21:55.651371] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.683 [2024-04-26 12:21:55.651591] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.683 [2024-04-26 12:21:55.651599] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.683 [2024-04-26 12:21:55.651607] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.683 [2024-04-26 12:21:55.655135] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.683 [2024-04-26 12:21:55.663862] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.683 [2024-04-26 12:21:55.664546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.664968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.664983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.683 [2024-04-26 12:21:55.664992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.683 [2024-04-26 12:21:55.665228] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.683 [2024-04-26 12:21:55.665449] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.683 [2024-04-26 12:21:55.665457] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.683 [2024-04-26 12:21:55.665465] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.683 [2024-04-26 12:21:55.668998] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.683 [2024-04-26 12:21:55.677713] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.683 [2024-04-26 12:21:55.678383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.678720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.678732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.683 [2024-04-26 12:21:55.678741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.683 [2024-04-26 12:21:55.678984] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.683 [2024-04-26 12:21:55.679205] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.683 [2024-04-26 12:21:55.679214] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.683 [2024-04-26 12:21:55.679221] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.683 [2024-04-26 12:21:55.682744] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.683 [2024-04-26 12:21:55.691673] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.683 [2024-04-26 12:21:55.692309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.692654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.692667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.683 [2024-04-26 12:21:55.692676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.683 [2024-04-26 12:21:55.692920] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.683 [2024-04-26 12:21:55.693141] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.683 [2024-04-26 12:21:55.693149] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.683 [2024-04-26 12:21:55.693156] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.683 [2024-04-26 12:21:55.696681] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.683 [2024-04-26 12:21:55.705606] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.683 [2024-04-26 12:21:55.706259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.706430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.706443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.683 [2024-04-26 12:21:55.706452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.683 [2024-04-26 12:21:55.706688] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.683 [2024-04-26 12:21:55.706916] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.683 [2024-04-26 12:21:55.706925] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.683 [2024-04-26 12:21:55.706932] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.683 [2024-04-26 12:21:55.710458] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.683 [2024-04-26 12:21:55.719398] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.683 [2024-04-26 12:21:55.720066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.720278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.683 [2024-04-26 12:21:55.720291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.683 [2024-04-26 12:21:55.720300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.683 [2024-04-26 12:21:55.720536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.684 [2024-04-26 12:21:55.720757] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.684 [2024-04-26 12:21:55.720765] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.684 [2024-04-26 12:21:55.720773] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.684 [2024-04-26 12:21:55.724302] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.684 [2024-04-26 12:21:55.733240] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.684 [2024-04-26 12:21:55.733924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.734285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.734298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.684 [2024-04-26 12:21:55.734307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.684 [2024-04-26 12:21:55.734544] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.684 [2024-04-26 12:21:55.734764] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.684 [2024-04-26 12:21:55.734773] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.684 [2024-04-26 12:21:55.734780] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.684 [2024-04-26 12:21:55.738315] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.684 [2024-04-26 12:21:55.747049] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.684 [2024-04-26 12:21:55.747736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.748082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.748097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.684 [2024-04-26 12:21:55.748106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.684 [2024-04-26 12:21:55.748343] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.684 [2024-04-26 12:21:55.748563] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.684 [2024-04-26 12:21:55.748572] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.684 [2024-04-26 12:21:55.748579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.684 [2024-04-26 12:21:55.752107] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.684 [2024-04-26 12:21:55.760833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.684 [2024-04-26 12:21:55.761473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.761827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.761848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.684 [2024-04-26 12:21:55.761858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.684 [2024-04-26 12:21:55.762096] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.684 [2024-04-26 12:21:55.762316] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.684 [2024-04-26 12:21:55.762324] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.684 [2024-04-26 12:21:55.762332] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.684 [2024-04-26 12:21:55.765859] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.684 [2024-04-26 12:21:55.774786] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.684 [2024-04-26 12:21:55.775372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.775595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.775605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.684 [2024-04-26 12:21:55.775613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.684 [2024-04-26 12:21:55.775830] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.684 [2024-04-26 12:21:55.776056] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.684 [2024-04-26 12:21:55.776064] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.684 [2024-04-26 12:21:55.776071] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.684 [2024-04-26 12:21:55.779616] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.684 [2024-04-26 12:21:55.788618] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.684 [2024-04-26 12:21:55.789260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.789604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.789618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.684 [2024-04-26 12:21:55.789627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.684 [2024-04-26 12:21:55.789871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.684 [2024-04-26 12:21:55.790092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.684 [2024-04-26 12:21:55.790101] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.684 [2024-04-26 12:21:55.790109] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.684 [2024-04-26 12:21:55.793634] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.684 [2024-04-26 12:21:55.802562] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.684 [2024-04-26 12:21:55.803258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.803611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.803628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.684 [2024-04-26 12:21:55.803637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.684 [2024-04-26 12:21:55.803880] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.684 [2024-04-26 12:21:55.804101] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.684 [2024-04-26 12:21:55.804109] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.684 [2024-04-26 12:21:55.804116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.684 [2024-04-26 12:21:55.807641] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.684 [2024-04-26 12:21:55.816364] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.684 [2024-04-26 12:21:55.816914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.817324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.817337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.684 [2024-04-26 12:21:55.817346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.684 [2024-04-26 12:21:55.817583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.684 [2024-04-26 12:21:55.817803] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.684 [2024-04-26 12:21:55.817811] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.684 [2024-04-26 12:21:55.817819] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.684 [2024-04-26 12:21:55.821349] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.684 [2024-04-26 12:21:55.830276] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.684 [2024-04-26 12:21:55.830816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.831148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.831159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.684 [2024-04-26 12:21:55.831166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.684 [2024-04-26 12:21:55.831384] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.684 [2024-04-26 12:21:55.831602] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.684 [2024-04-26 12:21:55.831609] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.684 [2024-04-26 12:21:55.831616] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.684 [2024-04-26 12:21:55.835137] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.684 [2024-04-26 12:21:55.844060] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.684 [2024-04-26 12:21:55.844597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.844791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.684 [2024-04-26 12:21:55.844801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.684 [2024-04-26 12:21:55.844812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.684 [2024-04-26 12:21:55.845036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.685 [2024-04-26 12:21:55.845254] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.685 [2024-04-26 12:21:55.845262] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.685 [2024-04-26 12:21:55.845268] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.685 [2024-04-26 12:21:55.848916] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.685 [2024-04-26 12:21:55.857847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.685 [2024-04-26 12:21:55.858509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.685 [2024-04-26 12:21:55.858856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.685 [2024-04-26 12:21:55.858871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.685 [2024-04-26 12:21:55.858880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.685 [2024-04-26 12:21:55.859116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.685 [2024-04-26 12:21:55.859337] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.685 [2024-04-26 12:21:55.859345] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.685 [2024-04-26 12:21:55.859353] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.685 [2024-04-26 12:21:55.862889] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.685 [2024-04-26 12:21:55.871618] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.685 [2024-04-26 12:21:55.872066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.685 [2024-04-26 12:21:55.872401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.685 [2024-04-26 12:21:55.872410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.685 [2024-04-26 12:21:55.872418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.685 [2024-04-26 12:21:55.872637] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.685 [2024-04-26 12:21:55.872858] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.685 [2024-04-26 12:21:55.872867] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.685 [2024-04-26 12:21:55.872874] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.685 [2024-04-26 12:21:55.876397] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.685 [2024-04-26 12:21:55.885529] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.685 [2024-04-26 12:21:55.886182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.685 [2024-04-26 12:21:55.886507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.685 [2024-04-26 12:21:55.886520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.685 [2024-04-26 12:21:55.886529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.685 [2024-04-26 12:21:55.886770] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.685 [2024-04-26 12:21:55.886998] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.685 [2024-04-26 12:21:55.887006] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.685 [2024-04-26 12:21:55.887014] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.685 [2024-04-26 12:21:55.890538] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.960 [2024-04-26 12:21:55.899474] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.960 [2024-04-26 12:21:55.899894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.960 [2024-04-26 12:21:55.900097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.960 [2024-04-26 12:21:55.900107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.960 [2024-04-26 12:21:55.900115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.960 [2024-04-26 12:21:55.900333] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.960 [2024-04-26 12:21:55.900550] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.960 [2024-04-26 12:21:55.900557] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.960 [2024-04-26 12:21:55.900564] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.960 [2024-04-26 12:21:55.904087] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.960 [2024-04-26 12:21:55.913239] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.960 [2024-04-26 12:21:55.913625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.960 [2024-04-26 12:21:55.913944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.960 [2024-04-26 12:21:55.913955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.960 [2024-04-26 12:21:55.913962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.960 [2024-04-26 12:21:55.914180] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.960 [2024-04-26 12:21:55.914397] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.960 [2024-04-26 12:21:55.914405] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.960 [2024-04-26 12:21:55.914412] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.960 [2024-04-26 12:21:55.917933] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.960 [2024-04-26 12:21:55.927066] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.960 [2024-04-26 12:21:55.927749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.960 [2024-04-26 12:21:55.928095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.960 [2024-04-26 12:21:55.928110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.960 [2024-04-26 12:21:55.928120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.960 [2024-04-26 12:21:55.928356] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.960 [2024-04-26 12:21:55.928582] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.960 [2024-04-26 12:21:55.928590] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.960 [2024-04-26 12:21:55.928598] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.960 [2024-04-26 12:21:55.932128] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.960 [2024-04-26 12:21:55.940850] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.960 [2024-04-26 12:21:55.941491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.960 [2024-04-26 12:21:55.941828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.960 [2024-04-26 12:21:55.941848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.960 [2024-04-26 12:21:55.941857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.960 [2024-04-26 12:21:55.942094] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.960 [2024-04-26 12:21:55.942314] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.960 [2024-04-26 12:21:55.942322] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.960 [2024-04-26 12:21:55.942330] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.960 [2024-04-26 12:21:55.945864] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.960 [2024-04-26 12:21:55.954799] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.960 [2024-04-26 12:21:55.955241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.960 [2024-04-26 12:21:55.955601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.960 [2024-04-26 12:21:55.955610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.960 [2024-04-26 12:21:55.955618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.960 [2024-04-26 12:21:55.955835] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.960 [2024-04-26 12:21:55.956058] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.960 [2024-04-26 12:21:55.956066] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.960 [2024-04-26 12:21:55.956073] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.960 [2024-04-26 12:21:55.959593] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.960 [2024-04-26 12:21:55.968565] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.960 [2024-04-26 12:21:55.969179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.960 [2024-04-26 12:21:55.969517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.960 [2024-04-26 12:21:55.969530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.960 [2024-04-26 12:21:55.969539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.960 [2024-04-26 12:21:55.969776] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.960 [2024-04-26 12:21:55.970003] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.960 [2024-04-26 12:21:55.970017] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.960 [2024-04-26 12:21:55.970024] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.960 [2024-04-26 12:21:55.973549] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.960 [2024-04-26 12:21:55.982516] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.960 [2024-04-26 12:21:55.983079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.960 [2024-04-26 12:21:55.983290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.960 [2024-04-26 12:21:55.983303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.960 [2024-04-26 12:21:55.983312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.960 [2024-04-26 12:21:55.983550] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.960 [2024-04-26 12:21:55.983771] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.960 [2024-04-26 12:21:55.983779] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.960 [2024-04-26 12:21:55.983787] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.961 [2024-04-26 12:21:55.987318] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.961 [2024-04-26 12:21:55.996459] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.961 [2024-04-26 12:21:55.997149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:55.997491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:55.997504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.961 [2024-04-26 12:21:55.997513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.961 [2024-04-26 12:21:55.997750] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.961 [2024-04-26 12:21:55.997978] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.961 [2024-04-26 12:21:55.997988] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.961 [2024-04-26 12:21:55.997995] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.961 [2024-04-26 12:21:56.001520] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.961 [2024-04-26 12:21:56.010242] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.961 [2024-04-26 12:21:56.010898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.011180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.011193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.961 [2024-04-26 12:21:56.011202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.961 [2024-04-26 12:21:56.011438] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.961 [2024-04-26 12:21:56.011659] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.961 [2024-04-26 12:21:56.011667] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.961 [2024-04-26 12:21:56.011678] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.961 [2024-04-26 12:21:56.015210] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.961 [2024-04-26 12:21:56.024143] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.961 [2024-04-26 12:21:56.024806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.025208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.025222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.961 [2024-04-26 12:21:56.025231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.961 [2024-04-26 12:21:56.025468] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.961 [2024-04-26 12:21:56.025688] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.961 [2024-04-26 12:21:56.025696] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.961 [2024-04-26 12:21:56.025703] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.961 [2024-04-26 12:21:56.029232] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.961 [2024-04-26 12:21:56.037959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.961 [2024-04-26 12:21:56.038641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.038981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.038995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.961 [2024-04-26 12:21:56.039005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.961 [2024-04-26 12:21:56.039241] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.961 [2024-04-26 12:21:56.039461] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.961 [2024-04-26 12:21:56.039470] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.961 [2024-04-26 12:21:56.039478] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.961 [2024-04-26 12:21:56.043010] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.961 [2024-04-26 12:21:56.051744] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.961 [2024-04-26 12:21:56.052447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.052798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.052812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.961 [2024-04-26 12:21:56.052821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.961 [2024-04-26 12:21:56.053065] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.961 [2024-04-26 12:21:56.053286] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.961 [2024-04-26 12:21:56.053295] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.961 [2024-04-26 12:21:56.053304] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.961 [2024-04-26 12:21:56.056839] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.961 [2024-04-26 12:21:56.065570] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.961 [2024-04-26 12:21:56.066238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.066589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.066602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.961 [2024-04-26 12:21:56.066612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.961 [2024-04-26 12:21:56.066857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.961 [2024-04-26 12:21:56.067078] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.961 [2024-04-26 12:21:56.067087] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.961 [2024-04-26 12:21:56.067095] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.961 [2024-04-26 12:21:56.070619] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.961 [2024-04-26 12:21:56.079347] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.961 [2024-04-26 12:21:56.080065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.080417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.080430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.961 [2024-04-26 12:21:56.080440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.961 [2024-04-26 12:21:56.080677] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.961 [2024-04-26 12:21:56.080903] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.961 [2024-04-26 12:21:56.080912] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.961 [2024-04-26 12:21:56.080919] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.961 [2024-04-26 12:21:56.084444] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.961 [2024-04-26 12:21:56.093168] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.961 [2024-04-26 12:21:56.093603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.093852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.093863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.961 [2024-04-26 12:21:56.093870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.961 [2024-04-26 12:21:56.094088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.961 [2024-04-26 12:21:56.094306] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.961 [2024-04-26 12:21:56.094313] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.961 [2024-04-26 12:21:56.094320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.961 [2024-04-26 12:21:56.097841] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.961 [2024-04-26 12:21:56.106985] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.961 [2024-04-26 12:21:56.107642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.108017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.961 [2024-04-26 12:21:56.108032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.961 [2024-04-26 12:21:56.108041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.961 [2024-04-26 12:21:56.108278] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.961 [2024-04-26 12:21:56.108498] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.961 [2024-04-26 12:21:56.108507] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.961 [2024-04-26 12:21:56.108514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.961 [2024-04-26 12:21:56.112044] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
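The block above is one pattern repeated: bdev_nvme kicks off a controller reset, the NVMe/TCP reconnect to 10.0.0.2 port 4420 gets connection-refused (errno 111) because no listener exists yet, the qpair is torn down ("Bad file descriptor" on flush), and the reset is declared failed before the next attempt. A minimal bash sketch of that retry-until-listener pattern, outside the test itself (host and port are taken from the log; the loop and the 0.1 s pacing are illustrative, not what bdev_nvme actually runs):

    host=10.0.0.2 port=4420
    # Probe the target with a plain TCP connect; connect() failing with
    # ECONNREFUSED (errno 111) matches the posix_sock_create errors above.
    until (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
        echo "connect() to ${host}:${port} refused; retrying"
        sleep 0.1    # assumed pacing; the driver schedules its own retries
    done
    echo "listener reachable; a controller reset/reconnect can now succeed"

Once nvmf_subsystem_add_listener runs further down, exactly that transition shows up in the log.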
00:25:54.961 12:21:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:54.961 [2024-04-26 12:21:56.120766] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.961 12:21:56 -- common/autotest_common.sh@850 -- # return 0 00:25:54.961 12:21:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:54.962 [2024-04-26 12:21:56.121344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.962 12:21:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:54.962 [2024-04-26 12:21:56.121665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.962 [2024-04-26 12:21:56.121675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.962 [2024-04-26 12:21:56.121682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.962 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:25:54.962 [2024-04-26 12:21:56.121905] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.962 [2024-04-26 12:21:56.122123] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.962 [2024-04-26 12:21:56.122131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.962 [2024-04-26 12:21:56.122137] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.962 [2024-04-26 12:21:56.125661] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.962 [2024-04-26 12:21:56.134594] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.962 [2024-04-26 12:21:56.134885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.962 [2024-04-26 12:21:56.135211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.962 [2024-04-26 12:21:56.135222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.962 [2024-04-26 12:21:56.135230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.962 [2024-04-26 12:21:56.135450] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.962 [2024-04-26 12:21:56.135669] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.962 [2024-04-26 12:21:56.135676] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.962 [2024-04-26 12:21:56.135683] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.962 [2024-04-26 12:21:56.139213] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.962 [2024-04-26 12:21:56.148367] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.962 [2024-04-26 12:21:56.149067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.962 [2024-04-26 12:21:56.149410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.962 [2024-04-26 12:21:56.149423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.962 [2024-04-26 12:21:56.149432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.962 [2024-04-26 12:21:56.149670] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.962 [2024-04-26 12:21:56.149895] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.962 [2024-04-26 12:21:56.149906] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.962 [2024-04-26 12:21:56.149914] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.962 [2024-04-26 12:21:56.153441] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.962 12:21:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.962 12:21:56 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.962 12:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:54.962 [2024-04-26 12:21:56.162174] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.962 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:25:54.962 [2024-04-26 12:21:56.162730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.962 [2024-04-26 12:21:56.162942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.962 [2024-04-26 12:21:56.162954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.962 [2024-04-26 12:21:56.162962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.962 [2024-04-26 12:21:56.163180] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.962 [2024-04-26 12:21:56.163398] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.962 [2024-04-26 12:21:56.163407] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.962 [2024-04-26 12:21:56.163414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.962 [2024-04-26 12:21:56.166937] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.962 [2024-04-26 12:21:56.168648] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.962 12:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:54.962 12:21:56 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:54.962 12:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:54.962 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:25:54.962 [2024-04-26 12:21:56.176072] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.962 [2024-04-26 12:21:56.176723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.962 [2024-04-26 12:21:56.176986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.962 [2024-04-26 12:21:56.177003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:54.962 [2024-04-26 12:21:56.177012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:54.962 [2024-04-26 12:21:56.177254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:54.962 [2024-04-26 12:21:56.177475] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.962 [2024-04-26 12:21:56.177483] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.962 [2024-04-26 12:21:56.177490] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:55.223 [2024-04-26 12:21:56.181022] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:55.223 [2024-04-26 12:21:56.189953] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:55.223 [2024-04-26 12:21:56.190523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.223 [2024-04-26 12:21:56.190834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.223 [2024-04-26 12:21:56.190850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:55.223 [2024-04-26 12:21:56.190857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:55.223 [2024-04-26 12:21:56.191076] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:55.223 [2024-04-26 12:21:56.191294] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:55.223 [2024-04-26 12:21:56.191302] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:55.223 [2024-04-26 12:21:56.191309] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:55.223 [2024-04-26 12:21:56.194881] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:55.223 Malloc0 00:25:55.223 12:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.223 12:21:56 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:55.223 12:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:55.223 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:25:55.223 [2024-04-26 12:21:56.203821] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:55.223 [2024-04-26 12:21:56.204512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.223 [2024-04-26 12:21:56.204766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.223 [2024-04-26 12:21:56.204779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:55.223 [2024-04-26 12:21:56.204789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:55.223 [2024-04-26 12:21:56.205033] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:55.223 [2024-04-26 12:21:56.205255] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:55.223 [2024-04-26 12:21:56.205263] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:55.223 [2024-04-26 12:21:56.205271] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:55.223 [2024-04-26 12:21:56.208794] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:55.223 12:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.223 12:21:56 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:55.223 12:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:55.223 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:25:55.223 [2024-04-26 12:21:56.217726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:55.223 [2024-04-26 12:21:56.218416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.223 [2024-04-26 12:21:56.218757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.223 [2024-04-26 12:21:56.218770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:55.223 [2024-04-26 12:21:56.218779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:55.223 [2024-04-26 12:21:56.219023] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:55.223 [2024-04-26 12:21:56.219245] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:55.223 [2024-04-26 12:21:56.219253] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:55.223 [2024-04-26 12:21:56.219260] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:55.223 [2024-04-26 12:21:56.222785] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:55.223 12:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.223 12:21:56 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.223 12:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:55.223 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:25:55.223 [2024-04-26 12:21:56.231513] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:55.223 [2024-04-26 12:21:56.232231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.223 [2024-04-26 12:21:56.232582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.223 [2024-04-26 12:21:56.232595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee3620 with addr=10.0.0.2, port=4420 00:25:55.223 [2024-04-26 12:21:56.232604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3620 is same with the state(5) to be set 00:25:55.223 [2024-04-26 12:21:56.232848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee3620 (9): Bad file descriptor 00:25:55.223 [2024-04-26 12:21:56.233069] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:55.223 [2024-04-26 12:21:56.233078] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:55.223 [2024-04-26 12:21:56.233085] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:55.223 [2024-04-26 12:21:56.233223] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.223 [2024-04-26 12:21:56.236609] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:55.223 12:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.223 12:21:56 -- host/bdevperf.sh@38 -- # wait 3560150 00:25:55.223 [2024-04-26 12:21:56.245338] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:55.223 [2024-04-26 12:21:56.412784] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
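Interleaved with the reset noise above, the host/bdevperf.sh trace (lines 17-21) builds the target side: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and finally the listener on 10.0.0.2 port 4420; at that point the pending reset succeeds ("Resetting controller successful") and the script waits on the bdevperf process (pid 3560150). Collected in one place, the same calls look like this (a sketch: rpc_cmd wraps SPDK's rpc.py, whose path is assumed here; the RPC names and arguments are copied verbatim from the trace):

    rpc=./scripts/rpc.py                                  # assumed path for the rpc_cmd wrapper
    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, flags as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener in place, bdevperf runs its 15-second verify workload (queue depth 128, 4 KiB I/O), whose summary appears below.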
00:26:05.227
00:26:05.227                                                  Latency(us)
00:26:05.227 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:05.227 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:05.227   Verification LBA range: start 0x0 length 0x4000
00:26:05.227   Nvme1n1                   :      15.01    8198.26      32.02    9896.21       0.00    7048.54     774.83   13817.17
00:26:05.227 ===================================================================================================================
00:26:05.227 Total                       :    8198.26      32.02    9896.21       0.00    7048.54     774.83   13817.17
00:26:05.227 12:22:04 -- host/bdevperf.sh@39 -- # sync
00:26:05.227 12:22:04 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:05.227 12:22:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:05.227 12:22:04 -- common/autotest_common.sh@10 -- # set +x
00:26:05.227 12:22:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:05.227 12:22:04 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:26:05.227 12:22:04 -- host/bdevperf.sh@44 -- # nvmftestfini
00:26:05.227 12:22:04 -- nvmf/common.sh@477 -- # nvmfcleanup
00:26:05.227 12:22:04 -- nvmf/common.sh@117 -- # sync
00:26:05.227 12:22:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:05.228 12:22:04 -- nvmf/common.sh@120 -- # set +e
00:26:05.228 12:22:04 -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:05.228 12:22:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:05.228 rmmod nvme_tcp
00:26:05.228 rmmod nvme_fabrics
00:26:05.228 rmmod nvme_keyring
00:26:05.228 12:22:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:05.228 12:22:05 -- nvmf/common.sh@124 -- # set -e
00:26:05.228 12:22:05 -- nvmf/common.sh@125 -- # return 0
00:26:05.228 12:22:05 -- nvmf/common.sh@478 -- # '[' -n 3561475 ']'
00:26:05.228 12:22:05 -- nvmf/common.sh@479 -- # killprocess 3561475
00:26:05.228 12:22:05 -- common/autotest_common.sh@936 -- # '[' -z 3561475 ']'
00:26:05.228 12:22:05 -- common/autotest_common.sh@940 -- # kill -0 3561475
00:26:05.228 12:22:05 -- common/autotest_common.sh@941 -- # uname
00:26:05.228 12:22:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:05.228 12:22:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3561475
00:26:05.228 12:22:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:26:05.228 12:22:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:26:05.228 12:22:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3561475'
00:26:05.228 killing process with pid 3561475
00:26:05.228 12:22:05 -- common/autotest_common.sh@955 -- # kill 3561475
00:26:05.228 12:22:05 -- common/autotest_common.sh@960 -- # wait 3561475
00:26:05.228 12:22:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:26:05.228 12:22:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:26:05.228 12:22:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:26:05.228 12:22:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:05.228 12:22:05 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:05.228 12:22:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:05.228 12:22:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:05.228 12:22:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:06.171 12:22:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:06.171
00:26:06.171 real 0m27.279s
00:26:06.171 user 1m3.233s
00:26:06.171 sys 0m6.665s
00:26:06.171 12:22:07 --
common/autotest_common.sh@1112 -- # xtrace_disable 00:26:06.171 12:22:07 -- common/autotest_common.sh@10 -- # set +x 00:26:06.171 ************************************ 00:26:06.171 END TEST nvmf_bdevperf 00:26:06.171 ************************************ 00:26:06.171 12:22:07 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:06.171 12:22:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:06.171 12:22:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:06.171 12:22:07 -- common/autotest_common.sh@10 -- # set +x 00:26:06.433 ************************************ 00:26:06.433 START TEST nvmf_target_disconnect 00:26:06.433 ************************************ 00:26:06.433 12:22:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:06.433 * Looking for test storage... 00:26:06.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:06.433 12:22:07 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:06.433 12:22:07 -- nvmf/common.sh@7 -- # uname -s 00:26:06.433 12:22:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.433 12:22:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.433 12:22:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.433 12:22:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.433 12:22:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.433 12:22:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.433 12:22:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.433 12:22:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.433 12:22:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.433 12:22:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.433 12:22:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:06.433 12:22:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:06.433 12:22:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.433 12:22:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.433 12:22:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:06.433 12:22:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:06.433 12:22:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:06.433 12:22:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.433 12:22:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.433 12:22:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.433 12:22:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.433 12:22:07 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.433 12:22:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.433 12:22:07 -- paths/export.sh@5 -- # export PATH 00:26:06.433 12:22:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.433 12:22:07 -- nvmf/common.sh@47 -- # : 0 00:26:06.433 12:22:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:06.433 12:22:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:06.433 12:22:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:06.433 12:22:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.433 12:22:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.433 12:22:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:06.433 12:22:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:06.433 12:22:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:06.433 12:22:07 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:06.433 12:22:07 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:06.433 12:22:07 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:06.433 12:22:07 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:26:06.433 12:22:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:06.433 12:22:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:06.433 12:22:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:06.433 12:22:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:06.433 12:22:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:06.433 12:22:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.433 12:22:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:06.433 12:22:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.433 12:22:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:06.433 12:22:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:06.433 12:22:07 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:26:06.433 12:22:07 -- common/autotest_common.sh@10 -- # set +x 00:26:14.582 12:22:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:14.582 12:22:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:14.582 12:22:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:14.582 12:22:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:14.582 12:22:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:14.582 12:22:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:14.582 12:22:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:14.582 12:22:14 -- nvmf/common.sh@295 -- # net_devs=() 00:26:14.582 12:22:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:14.582 12:22:14 -- nvmf/common.sh@296 -- # e810=() 00:26:14.582 12:22:14 -- nvmf/common.sh@296 -- # local -ga e810 00:26:14.582 12:22:14 -- nvmf/common.sh@297 -- # x722=() 00:26:14.582 12:22:14 -- nvmf/common.sh@297 -- # local -ga x722 00:26:14.582 12:22:14 -- nvmf/common.sh@298 -- # mlx=() 00:26:14.582 12:22:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:14.582 12:22:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.582 12:22:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.582 12:22:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.582 12:22:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.582 12:22:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.582 12:22:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.582 12:22:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.582 12:22:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.582 12:22:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.582 12:22:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.582 12:22:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.582 12:22:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:14.582 12:22:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:14.582 12:22:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:14.582 12:22:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:14.582 12:22:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:14.582 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:14.582 12:22:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:14.582 12:22:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:14.582 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:14.582 12:22:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.582 12:22:14 
-- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:14.582 12:22:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:14.582 12:22:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.582 12:22:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:14.582 12:22:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.582 12:22:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:14.582 Found net devices under 0000:31:00.0: cvl_0_0 00:26:14.582 12:22:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.582 12:22:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:14.582 12:22:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.582 12:22:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:14.582 12:22:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.582 12:22:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:14.582 Found net devices under 0000:31:00.1: cvl_0_1 00:26:14.582 12:22:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.582 12:22:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:14.582 12:22:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:14.582 12:22:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:14.582 12:22:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:14.582 12:22:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:14.582 12:22:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:14.582 12:22:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:14.582 12:22:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:14.582 12:22:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:14.582 12:22:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:14.582 12:22:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:14.582 12:22:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:14.582 12:22:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:14.582 12:22:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:14.582 12:22:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:14.582 12:22:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:14.582 12:22:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:14.582 12:22:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:14.582 12:22:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:14.582 12:22:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:14.582 12:22:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:14.582 12:22:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:14.582 12:22:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:14.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:14.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.731 ms 00:26:14.582 00:26:14.582 --- 10.0.0.2 ping statistics --- 00:26:14.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.582 rtt min/avg/max/mdev = 0.731/0.731/0.731/0.000 ms 00:26:14.582 12:22:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:14.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:14.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:26:14.582 00:26:14.582 --- 10.0.0.1 ping statistics --- 00:26:14.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.582 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:26:14.582 12:22:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.582 12:22:14 -- nvmf/common.sh@411 -- # return 0 00:26:14.582 12:22:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:14.582 12:22:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.582 12:22:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:14.582 12:22:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.582 12:22:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:14.582 12:22:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:14.582 12:22:14 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:14.582 12:22:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:14.582 12:22:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:14.582 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:26:14.582 ************************************ 00:26:14.582 START TEST nvmf_target_disconnect_tc1 00:26:14.582 ************************************ 00:26:14.582 12:22:15 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:26:14.582 12:22:15 -- host/target_disconnect.sh@32 -- # set +e 00:26:14.582 12:22:15 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:14.582 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.582 [2024-04-26 12:22:15.214137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.582 [2024-04-26 12:22:15.214487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.582 [2024-04-26 12:22:15.214500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a75f0 with addr=10.0.0.2, port=4420 00:26:14.582 [2024-04-26 12:22:15.214527] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:14.582 [2024-04-26 12:22:15.214539] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:14.582 [2024-04-26 12:22:15.214547] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:14.582 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:14.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:14.582 Initializing NVMe Controllers 00:26:14.582 12:22:15 -- host/target_disconnect.sh@33 -- # trap - ERR 00:26:14.582 12:22:15 -- host/target_disconnect.sh@33 -- # print_backtrace 00:26:14.582 12:22:15 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:26:14.582 12:22:15 -- common/autotest_common.sh@1139 -- # return 0 00:26:14.582 
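For reference, the sequence exercised up to this point reduces to a short sketch: one ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, its peer (cvl_0_1) stays in the default namespace as 10.0.0.1, connectivity is verified with ping in both directions, and tc1 then probes 10.0.0.2:4420 before any target is listening, so connect() is refused with errno 111 and spdk_nvme_probe() fails as the test expects. The commands below are taken from the xtrace above; the final success check is illustrative only, the harness tracks the failure through its own flag instead.

    # topology setup, as traced above (run as root; interface names come from the PCI scan)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator
    modprobe nvme-tcp

    # tc1: probe 10.0.0.2:4420 with no nvmf target running; connect() must fail (errno 111)
    set +e
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    [ $? -ne 0 ] && echo 'tc1: probe failed as expected'
    set -e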
12:22:15 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:26:14.582 12:22:15 -- host/target_disconnect.sh@41 -- # set -e 00:26:14.582 00:26:14.582 real 0m0.099s 00:26:14.582 user 0m0.047s 00:26:14.582 sys 0m0.051s 00:26:14.582 12:22:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:14.582 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:14.582 ************************************ 00:26:14.582 END TEST nvmf_target_disconnect_tc1 00:26:14.582 ************************************ 00:26:14.582 12:22:15 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:14.582 12:22:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:14.582 12:22:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:14.582 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:14.582 ************************************ 00:26:14.582 START TEST nvmf_target_disconnect_tc2 00:26:14.582 ************************************ 00:26:14.582 12:22:15 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:26:14.582 12:22:15 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:26:14.582 12:22:15 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:14.582 12:22:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:14.582 12:22:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:14.582 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:14.582 12:22:15 -- nvmf/common.sh@470 -- # nvmfpid=3567608 00:26:14.582 12:22:15 -- nvmf/common.sh@471 -- # waitforlisten 3567608 00:26:14.583 12:22:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:14.583 12:22:15 -- common/autotest_common.sh@817 -- # '[' -z 3567608 ']' 00:26:14.583 12:22:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.583 12:22:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:14.583 12:22:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.583 12:22:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:14.583 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:26:14.583 [2024-04-26 12:22:15.468641] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:26:14.583 [2024-04-26 12:22:15.468709] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.583 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.583 [2024-04-26 12:22:15.557068] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:14.583 [2024-04-26 12:22:15.650076] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:14.583 [2024-04-26 12:22:15.650138] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:14.583 [2024-04-26 12:22:15.650146] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:14.583 [2024-04-26 12:22:15.650153] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:14.583 [2024-04-26 12:22:15.650165] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:14.583 [2024-04-26 12:22:15.650328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:14.583 [2024-04-26 12:22:15.650485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:14.583 [2024-04-26 12:22:15.650649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:14.583 [2024-04-26 12:22:15.650649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:15.154 12:22:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:15.154 12:22:16 -- common/autotest_common.sh@850 -- # return 0 00:26:15.154 12:22:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:15.154 12:22:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:15.154 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:26:15.154 12:22:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.154 12:22:16 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:15.154 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.154 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:26:15.154 Malloc0 00:26:15.154 12:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.154 12:22:16 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:15.154 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.154 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:26:15.154 [2024-04-26 12:22:16.325849] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.154 12:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.154 12:22:16 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:15.154 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.154 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:26:15.154 12:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.154 12:22:16 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:15.154 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.154 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:26:15.154 12:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.154 12:22:16 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.154 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.154 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:26:15.154 [2024-04-26 12:22:16.366263] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.154 12:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.154 12:22:16 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:15.154 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.154 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:26:15.415 12:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.415 12:22:16 -- host/target_disconnect.sh@50 -- # reconnectpid=3567953 00:26:15.415 12:22:16 -- host/target_disconnect.sh@52 -- # sleep 2 00:26:15.415 12:22:16 -- host/target_disconnect.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:15.415 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.338 12:22:18 -- host/target_disconnect.sh@53 -- # kill -9 3567608 00:26:17.338 12:22:18 -- host/target_disconnect.sh@55 -- # sleep 2 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Write completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Write completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Write completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Write completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Write completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Write completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Write completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Write completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Read completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 Write completed with error (sct=0, sc=8) 00:26:17.338 starting I/O failed 00:26:17.338 [2024-04-26 12:22:18.406127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:17.338 [2024-04-26 12:22:18.406427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.338 [2024-04-26 12:22:18.406638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.338 
[2024-04-26 12:22:18.406652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.338 qpair failed and we were unable to recover it. 00:26:17.338 [2024-04-26 12:22:18.407135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.338 [2024-04-26 12:22:18.408183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.338 [2024-04-26 12:22:18.408225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.338 qpair failed and we were unable to recover it. 00:26:17.338 [2024-04-26 12:22:18.408565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.338 [2024-04-26 12:22:18.408793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.338 [2024-04-26 12:22:18.408803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.338 qpair failed and we were unable to recover it. 00:26:17.338 [2024-04-26 12:22:18.409132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.338 [2024-04-26 12:22:18.409472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.338 [2024-04-26 12:22:18.409486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.338 qpair failed and we were unable to recover it. 00:26:17.338 [2024-04-26 12:22:18.409790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.338 [2024-04-26 12:22:18.409979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.338 [2024-04-26 12:22:18.409989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.410280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.410467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.410477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.410829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.410960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.410974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.411326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.411607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.411617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 
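The failures above are the intended outcome of tc2, which condenses into the following sketch assembled from the xtrace in this section: the target is started inside the cvl_0_0_ns_spdk namespace, a Malloc namespace is exported over TCP at 10.0.0.2:4420, the reconnect workload is started against it, and the target is then killed with SIGKILL, so in-flight I/O completes with sc=8 and every subsequent reconnect attempt gets connect() errno 111. rpc_cmd is assumed to be the autotest wrapper around SPDK's scripts/rpc.py; the method names and arguments are copied from the trace.

    # tc2: disconnect_init plus a forced target kill, as traced above
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!

    rpc_cmd bdev_malloc_create 64 512 -b Malloc0          # 64 MB bdev, 512 B blocks
    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # start the I/O + reconnect workload, then kill the target underneath it
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 $nvmfpid     # outstanding I/O fails (sct=0, sc=8); reconnects then see errno 111
    sleep 2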
00:26:17.339 [2024-04-26 12:22:18.411803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.412100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.412111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.412443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.412794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.412803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.413041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.413323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.413333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.413644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.413841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.413851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.414235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.414562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.414572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.414748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.415099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.415109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.415406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.415710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.415719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 
00:26:17.339 [2024-04-26 12:22:18.416033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.416365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.416375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.416531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.416762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.416775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.417901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.418191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.418200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.418522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.418869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.418879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.419078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.419416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.419426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.419703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.420029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.420039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.420190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.420511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.420521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 
00:26:17.339 [2024-04-26 12:22:18.420800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.421200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.421210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.421525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.421710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.421721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.421924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.422153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.422163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.422487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.422800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.422810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.423129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.423329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.423339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.423557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.423848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.423858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.424129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.424444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.424453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 
00:26:17.339 [2024-04-26 12:22:18.424771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.425109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.425118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.425474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.425798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.425807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.426104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.426357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.426366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.426666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.426896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.339 [2024-04-26 12:22:18.426906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.339 qpair failed and we were unable to recover it. 00:26:17.339 [2024-04-26 12:22:18.427221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.427423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.427432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.427732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.427930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.427940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.428245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.428572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.428581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 
00:26:17.340 [2024-04-26 12:22:18.428870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.429150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.429159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.429505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.429766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.429775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.430093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.430385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.430394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.430692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.430904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.430913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.431231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.431523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.431531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.431719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.431998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.432007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.432231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.432587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.432596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 
00:26:17.340 [2024-04-26 12:22:18.432933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.433216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.433224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.433554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.433868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.433877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.434067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.434282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.434291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.434555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.434826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.434835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.435028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.435371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.435380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.435655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.435943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.435953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.436256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.436601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.436610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 
00:26:17.340 [2024-04-26 12:22:18.436798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.437068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.437078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.437255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.437582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.437591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.437738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.438029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.438039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.438211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.438576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.438585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.438913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.439218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.439227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.439531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.439823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.439832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.440035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.440332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.440342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 
00:26:17.340 [2024-04-26 12:22:18.440684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.441005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.441018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.441313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.441643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.441654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.441882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.442214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.442225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.442562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.442854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.442866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.340 qpair failed and we were unable to recover it. 00:26:17.340 [2024-04-26 12:22:18.443186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.340 [2024-04-26 12:22:18.443488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.443500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.443796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.444094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.444106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.444441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.444745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.444757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 
00:26:17.341 [2024-04-26 12:22:18.445063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.445242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.445254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.445583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.445776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.445789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.446083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.446389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.446401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.446743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.447028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.447040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.447353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.447694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.447705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.447995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.448280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.448291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.448628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.448939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.448951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 
00:26:17.341 [2024-04-26 12:22:18.449244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.449561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.449573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.449876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.450062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.450074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.450449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.450785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.450796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.451118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.451423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.451439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.451768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.452142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.452158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.452482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.452812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.452828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.453133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.453453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.453468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 
00:26:17.341 [2024-04-26 12:22:18.453723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.454060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.454076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.454416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.454749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.454764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.455100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.455428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.455443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.455761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.456056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.456071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.456474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.456784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.456799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.457103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.457371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.457385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.457722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.458010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.458025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 
00:26:17.341 [2024-04-26 12:22:18.458370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.458673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.458688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.459018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.459314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.459330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.459633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.459966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.459983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.460282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.460490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.460507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 12:22:18.460818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.461153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 12:22:18.461169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.461383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.461715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.461731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.462110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.462437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.462453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 
00:26:17.342 [2024-04-26 12:22:18.462696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.462984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.463003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.463399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.463696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.463715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.464034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.464389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.464408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.464722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.465064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.465084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.465459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.465852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.465873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.466207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.466531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.466550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.466881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.467220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.467239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 
00:26:17.342 [2024-04-26 12:22:18.467552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.467880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.467900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.468223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.468533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.468553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.468889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.469252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.469271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.469604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.469864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.469883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.470240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.470594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.470613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.470833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.471159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.471179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.471486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.471835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.471861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 
00:26:17.342 [2024-04-26 12:22:18.472116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.472461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.472480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.472797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.473109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.473130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.473464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.473808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.473828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.474176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.474507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.474527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.474864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.475230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.475250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.475563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.475778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.475800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.476148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.476440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.476466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 
00:26:17.342 [2024-04-26 12:22:18.476801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.477136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.477163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.477578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.477920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.477948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.478335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.478644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.478671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.479063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.479409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.479436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.479747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.479960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.479991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 12:22:18.480213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 12:22:18.480447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.480475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.480817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.481236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.481263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 
00:26:17.343 [2024-04-26 12:22:18.481507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.481836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.481875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.482230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.482553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.482579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.482915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.483133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.483159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.483541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.483868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.483896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.484228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.484574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.484601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.484942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.485299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.485325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.485575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.485853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.485881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 
00:26:17.343 [2024-04-26 12:22:18.486141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.486490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.486517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.486871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.487304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.487332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.487559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.487907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.487935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.488324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.488680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.488706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.489047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.489381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.489408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.489758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.490114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.490141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.490509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.490861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.490890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 
00:26:17.343 [2024-04-26 12:22:18.491235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.491583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.491610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.491928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.492229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.492256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.492640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.492965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.492992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.493346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.493671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.493703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.494043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.494385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.494411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.494868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.495180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.495206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.495555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.495905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.495933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 
00:26:17.343 [2024-04-26 12:22:18.496273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.496624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.496651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 12:22:18.497020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 12:22:18.497371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.497397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.497644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.498006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.498034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.498291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.498618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.498645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.498897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.499232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.499260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.499604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.499939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.499967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.500305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.500630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.500662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 
00:26:17.344 [2024-04-26 12:22:18.501009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.501242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.501271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.501635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.501987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.502016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.502374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.502686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.502712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.502955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.503319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.503347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.503720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.504068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.504098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.504452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.504854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.504882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.505277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.505494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.505521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 
00:26:17.344 [2024-04-26 12:22:18.505859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.506205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.506232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.506561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.506895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.506922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.507282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.507653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.507684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.507907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.508299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.508326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.508671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.509006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.509034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.509376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.509607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.509637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.509899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.510250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.510277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 
00:26:17.344 [2024-04-26 12:22:18.510629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.510845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.510873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.511299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.511661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.511689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.512110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.512437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.512463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.512809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.513236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.513264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.513591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.513811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.513845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.514187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.514447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.514479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.514850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.515265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.515291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 
00:26:17.344 [2024-04-26 12:22:18.515642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.515896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.515923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.344 qpair failed and we were unable to recover it. 00:26:17.344 [2024-04-26 12:22:18.516249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.344 [2024-04-26 12:22:18.516549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.516575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.516801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.517132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.517159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.517506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.517857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.517885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.518248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.518568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.518595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.518890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.519218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.519244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.519609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.519904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.519931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 
00:26:17.345 [2024-04-26 12:22:18.520081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.520400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.520427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.520737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.521073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.521100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.521430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.521763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.521789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.522137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.522358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.522387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.522709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.523049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.523077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.523423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.523812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.523862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.524288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.524614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.524643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 
00:26:17.345 [2024-04-26 12:22:18.524937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.525338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.525366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.525729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.526072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.526099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.526464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.526832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.526868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.527269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.527600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.527627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.527873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.528219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.528246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.528590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.528921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.528949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.529400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.529736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.529762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 
00:26:17.345 [2024-04-26 12:22:18.530136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.530479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.530505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.530867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.531223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.531249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.531617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.531969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.531997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.532354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.532688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.532714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.533051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.533370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.533396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.533748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.534110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.534137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.534511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.534861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.534888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 
00:26:17.345 [2024-04-26 12:22:18.535273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.535622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.535649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.345 [2024-04-26 12:22:18.535986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.536344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.345 [2024-04-26 12:22:18.536371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.345 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.536740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.536966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.536994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.537307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.537645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.537671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.538050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.538266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.538291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.538651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.539000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.539027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.539372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.539582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.539612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 
00:26:17.346 [2024-04-26 12:22:18.539970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.540274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.540300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.540654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.541005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.541034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.541272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.541620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.541647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.542016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.542343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.542370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.542730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.543071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.543098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.543284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.543547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.543574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.543901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.544286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.544312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 
00:26:17.346 [2024-04-26 12:22:18.544675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.545003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.545030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.545368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.545714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.545741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.546084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.546400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.546427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.546767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.547128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.547155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.547502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.547824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.547866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.548083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.548425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.548451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.548818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.549239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.549267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 
00:26:17.346 [2024-04-26 12:22:18.549498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.549851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.549880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.550242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.550576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.550602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.550925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.551289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.551325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.551669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.552023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.552050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.552418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.552746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 12:22:18.552772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 12:22:18.552919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.553314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.553341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.553697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.554017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.554045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 
00:26:17.616 [2024-04-26 12:22:18.554417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.554735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.554761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.555007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.555363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.555390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.555743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.555965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.555993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.556372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.556604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.556631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.556871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.557186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.557213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.557562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.557893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.557921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.558276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.558611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.558637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 
00:26:17.616 [2024-04-26 12:22:18.558982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.559310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.559337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.559570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.559895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.559923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.560266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.560605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.560632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.560981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.561225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.561251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.561631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.561965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.561992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.562233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.562398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.562424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.562782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.562974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.563004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 
00:26:17.616 [2024-04-26 12:22:18.563370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.563704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.563730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.564120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.564491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.564518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.564878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.565232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.565258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.565606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.565933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.565960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.616 [2024-04-26 12:22:18.566312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.566658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 12:22:18.566684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.616 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.567028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.567384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.567411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.567772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.568185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.568212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 
00:26:17.617 [2024-04-26 12:22:18.568585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.568929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.568956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.569303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.569652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.569679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.569926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.570268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.570295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.570544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.570883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.570911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.571178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.571541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.571568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.571918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.572185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.572210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.572571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.572931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.572960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 
00:26:17.617 [2024-04-26 12:22:18.573328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.573690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.573717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.574079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.574489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.574515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.574743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.575091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.575119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.575477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.575816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.575852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.576244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.576589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.576615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.576985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.577329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.577355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.577559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.577967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.577994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 
00:26:17.617 [2024-04-26 12:22:18.578363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.578712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.578738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.579133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.579481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.579508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.579863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.580223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.580249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.580587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.580935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.580963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.581287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.581659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.581686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.582092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.582447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.582473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.582678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.583019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.583048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 
00:26:17.617 [2024-04-26 12:22:18.583405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.583745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.583771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.583993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.584336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.584363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.584723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.585059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.585087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.585444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.585800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.585827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.586184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.586511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.586538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.617 qpair failed and we were unable to recover it. 00:26:17.617 [2024-04-26 12:22:18.586874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 12:22:18.587214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.587239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.587486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.587854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.587882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 
00:26:17.618 [2024-04-26 12:22:18.588143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.588336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.588365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.588727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.589101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.589129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.589458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.589836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.589871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.590204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.590526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.590552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.590894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.591272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.591298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.591631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.591975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.592003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.592360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.592713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.592740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 
00:26:17.618 [2024-04-26 12:22:18.593092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.593417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.593443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.593834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.594184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.594210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.594657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.595003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.595031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.595257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.595625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.595652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.595983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.596334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.596361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.596710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.597062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.597089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.597447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.597795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.597821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 
00:26:17.618 [2024-04-26 12:22:18.598240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.598589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.598616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.598976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.599315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.599341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.599705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.600022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.600049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.600412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.600651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.600676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.601043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.601365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.601392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.601804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.602143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.602170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.602545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.602865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.602893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 
00:26:17.618 [2024-04-26 12:22:18.603228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.603551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.603577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.603936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.604287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.604314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.604635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.604998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.605025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.605376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.605698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.605730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.606072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.606381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.606407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.618 qpair failed and we were unable to recover it. 00:26:17.618 [2024-04-26 12:22:18.606727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.607065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 12:22:18.607093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.607442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.607797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.607824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 
00:26:17.619 [2024-04-26 12:22:18.608284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.608520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.608547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.608920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.609265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.609291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.609661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.609912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.609939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.610315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.610660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.610686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.611016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.611332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.611358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.611694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.612018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.612045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.612401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.612758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.612789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 
00:26:17.619 [2024-04-26 12:22:18.613152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.613570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.613596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.613957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.614324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.614351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.614576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.614917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.614946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.615289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.615654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.615680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.616056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.616386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.616412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.616768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.617111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.617138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.617507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.617832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.617867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 
00:26:17.619 [2024-04-26 12:22:18.618204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.618530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.618556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.618895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.619134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.619160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.619560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.619920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.619953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.620339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.620694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.620720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.621083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.621360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.621387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.621724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.622069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.622096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.622463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.622779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.622805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 
00:26:17.619 [2024-04-26 12:22:18.623183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.623583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.623609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.623942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.624267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.624293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.624660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.624961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.624988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.625342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.625697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.625724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.626013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.626350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.626377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.626726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.627091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.627124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.619 qpair failed and we were unable to recover it. 00:26:17.619 [2024-04-26 12:22:18.627383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.619 [2024-04-26 12:22:18.627755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.627782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 
00:26:17.620 [2024-04-26 12:22:18.628134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.628367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.628392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.628750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.629098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.629125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.629494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.629833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.629867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.630274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.630501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.630529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.630896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.631226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.631252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.631593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.631916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.631944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.632343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.632689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.632715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 
00:26:17.620 [2024-04-26 12:22:18.633067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.633393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.633419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.633763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.634096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.634123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.634463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.634813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.634856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.635262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.635578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.635604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.635967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.636224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.636250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.636611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.636935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.636962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.637339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.637678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.637704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 
00:26:17.620 [2024-04-26 12:22:18.638071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.638280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.638309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.638585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.638928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.638956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.639182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.639567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.639593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.639929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.640306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.640332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.640704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.640956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.640983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.641221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.641571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.641596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.641856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.642225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.642251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 
00:26:17.620 [2024-04-26 12:22:18.642667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.642911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.642941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.620 qpair failed and we were unable to recover it. 00:26:17.620 [2024-04-26 12:22:18.643299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.620 [2024-04-26 12:22:18.643630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.643656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.644000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.644368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.644394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.644738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.644890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.644920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.645186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.645416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.645445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.645821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.646144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.646171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.646437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.646776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.646802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 
00:26:17.621 [2024-04-26 12:22:18.647169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.647499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.647526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.647770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.648014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.648044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.648410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.648758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.648784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.649146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.649496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.649523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.649872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.650221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.650248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.650613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.650959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.650987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.651343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.651691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.651718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 
00:26:17.621 [2024-04-26 12:22:18.651958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.652328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.652354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.652717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.653057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.653084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.653452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.653811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.653844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.654234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.654565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.654591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.654991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.655312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.655338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.655669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.655968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.655997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.656362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.656709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.656737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 
00:26:17.621 [2024-04-26 12:22:18.657083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.657438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.657464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.657724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.658031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.658058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.658328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.658678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.658704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.658957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.659174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.659204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.659546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.659915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.659944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.660155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.660504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.660531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.660875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.661204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.661230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 
00:26:17.621 [2024-04-26 12:22:18.661497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.661862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.661889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.662243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.662599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.621 [2024-04-26 12:22:18.662625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.621 qpair failed and we were unable to recover it. 00:26:17.621 [2024-04-26 12:22:18.662901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.663191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.663217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.663616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.663941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.663969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.664325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.664700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.664726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.665111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.665467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.665494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.665820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.666157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.666184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 
00:26:17.622 [2024-04-26 12:22:18.666583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.666952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.666979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.667211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.667449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.667478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.667871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.668290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.668316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.668592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.668848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.668875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.669249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.669636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.669662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.670023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.670384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.670411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.670756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.671141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.671169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 
00:26:17.622 [2024-04-26 12:22:18.671403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.671660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.671687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.671942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.672302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.672329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.672712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.672966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.672992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.673373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.673701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.673727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.674082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.674434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.674461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.674849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.675197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.675223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.675590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.675921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.675950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 
00:26:17.622 [2024-04-26 12:22:18.676224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.676602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.676628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.677000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.677227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.677257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.677511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.677762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.677788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.678160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.678387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.678413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.678759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.678973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.679002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.679353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.679696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.679722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.680072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.680424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.680450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 
00:26:17.622 [2024-04-26 12:22:18.680778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.681200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.681228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.681591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.681917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.681943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.622 [2024-04-26 12:22:18.682299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.682656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.622 [2024-04-26 12:22:18.682684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.622 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.683040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.683402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.683429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.683811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.684145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.684172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.684521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.684862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.684890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.685169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.685547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.685573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 
00:26:17.623 [2024-04-26 12:22:18.685931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.686279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.686305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.686665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.687000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.687028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.687385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.687754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.687781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.687988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.688212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.688241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.688579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.688910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.688938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.689315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.689643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.689671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.689992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.690316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.690342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 
00:26:17.623 [2024-04-26 12:22:18.690644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.690875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.690902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.691161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.691516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.691542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.691790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.692040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.692069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.692283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.692657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.692683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.693027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.693378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.693404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.693677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.693903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.693929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.694291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.694509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.694536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 
00:26:17.623 [2024-04-26 12:22:18.694905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.695258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.695284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.695631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.695993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.696023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.696412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.696733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.696760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.696986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.697368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.697394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.697761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.698122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.698149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.698373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.698655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.698685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.698963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.699328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.699354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 
00:26:17.623 [2024-04-26 12:22:18.699476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.699708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.699733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.700082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.700414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.700441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.700803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.701136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.701162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.623 qpair failed and we were unable to recover it. 00:26:17.623 [2024-04-26 12:22:18.701513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.701739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.623 [2024-04-26 12:22:18.701766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.702018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.702376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.702403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.702765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.703100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.703127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.703485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.703734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.703760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 
00:26:17.624 [2024-04-26 12:22:18.704104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.704438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.704464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.704853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.705110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.705136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.705485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.705831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.705865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.706242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.706582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.706608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.707006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.707344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.707370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.707686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.707937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.707964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.708197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.708549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.708575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 
00:26:17.624 [2024-04-26 12:22:18.708909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.709246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.709273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.709664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.709879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.709905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.710254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.710577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.710602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.710961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.711310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.711336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.711573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.711927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.711955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.712292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.712646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.712672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.713023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.713250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.713276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 
00:26:17.624 [2024-04-26 12:22:18.713605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.713965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.713991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.714235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.714568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.714595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.714941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.715339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.715365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.715699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.716064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.716097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.716319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.716626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.716653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.717011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.717361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.717387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.717758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.718057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.718084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 
00:26:17.624 [2024-04-26 12:22:18.718416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.718645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.718673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.718926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.719269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.719295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.719657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.720008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.720035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.720375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.720704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.624 [2024-04-26 12:22:18.720730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.624 qpair failed and we were unable to recover it. 00:26:17.624 [2024-04-26 12:22:18.720991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.721340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.721367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.721722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.721961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.721988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.722317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.722549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.722583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 
00:26:17.625 [2024-04-26 12:22:18.722958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.723278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.723305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.723665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.723989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.724016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.724361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.724706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.724732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.724986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.725350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.725376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.725738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.726099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.726126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.726493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.726829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.726880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.727292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.727588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.727615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 
00:26:17.625 [2024-04-26 12:22:18.727967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.728339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.728366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.728729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.729068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.729095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.729453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.729669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.729703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.730061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.730380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.730407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.730770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.730998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.731028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.731349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.731662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.731688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.732105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.732313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.732341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 
00:26:17.625 [2024-04-26 12:22:18.732678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.733018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.733046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.733407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.733730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.733756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.734050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.734375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.734401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.734739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.735104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.735131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.735516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.735831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.735866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.736211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.736568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.736595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 00:26:17.625 [2024-04-26 12:22:18.736973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.737316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.625 [2024-04-26 12:22:18.737342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.625 qpair failed and we were unable to recover it. 
00:26:17.625 [2024-04-26 12:22:18.737671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.738018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.738046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.738382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.738591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.738619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.738987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.739381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.739407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.739778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.740114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.740141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.740506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.740739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.740769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.741141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.741451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.741478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.741915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.742160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.742188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 
00:26:17.626 [2024-04-26 12:22:18.742567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.742913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.742941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.743297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.743612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.743638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.744021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.744373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.744399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.744753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.745113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.745141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.745514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.745736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.745762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.746095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.746436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.746462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.746691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.747108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.747135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 
00:26:17.626 [2024-04-26 12:22:18.747466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.747814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.747855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.748147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.748505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.748531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.748894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.749271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.749297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.749636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.750002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.750029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.750376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.750607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.750634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.750982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.751369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.751395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.751736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.752106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.752134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 
00:26:17.626 [2024-04-26 12:22:18.752486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.752814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.752849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.753187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.753535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.753562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.753813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.754147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.754175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.754535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.754885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.754913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.755329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.755556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.755593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.755976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.756311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.756337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 00:26:17.626 [2024-04-26 12:22:18.756704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.756984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.757012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.626 qpair failed and we were unable to recover it. 
00:26:17.626 [2024-04-26 12:22:18.757350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.626 [2024-04-26 12:22:18.757679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.757706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.758105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.758454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.758480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.758715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.758991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.759020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.759367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.759726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.759752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.759995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.760398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.760424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.760764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.760993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.761019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.761282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.761664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.761690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 
00:26:17.627 [2024-04-26 12:22:18.762021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.762397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.762423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.762787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.763129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.763157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.763407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.763776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.763802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.764182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.764513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.764539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.764905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.765264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.765292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.765584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.765904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.765932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.766186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.766583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.766609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 
00:26:17.627 [2024-04-26 12:22:18.766967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.767218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.767243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.767617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.767940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.767968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.768205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.768556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.768583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.768931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.769260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.769287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.769667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.769998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.770025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.770373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.770727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.770754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.771151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.771496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.771522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 
00:26:17.627 [2024-04-26 12:22:18.771855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.772238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.772264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.772511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.772877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.772904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.773287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.773618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.773644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.773987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.774350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.774377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.774763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.775109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.775138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.775498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.775813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.775858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 00:26:17.627 [2024-04-26 12:22:18.776206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.776323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.627 [2024-04-26 12:22:18.776351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f26cc000b90 with addr=10.0.0.2, port=4420 00:26:17.627 qpair failed and we were unable to recover it. 
00:26:17.627 Read completed with error (sct=0, sc=8)
00:26:17.627 starting I/O failed
00:26:17.627 Read completed with error (sct=0, sc=8)
00:26:17.627 starting I/O failed
00:26:17.627 Read completed with error (sct=0, sc=8)
00:26:17.627 starting I/O failed
00:26:17.627 Read completed with error (sct=0, sc=8)
00:26:17.627 starting I/O failed
00:26:17.627 Read completed with error (sct=0, sc=8)
00:26:17.627 starting I/O failed
00:26:17.628 Read completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Read completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Read completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Read completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Read completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Read completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Read completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Read completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Read completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Write completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Write completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Read completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Read completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Write completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Write completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Write completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Write completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Write completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Write completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Write completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Write completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Write completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Read completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Read completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Write completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Write completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 Read completed with error (sct=0, sc=8)
00:26:17.628 starting I/O failed
00:26:17.628 [2024-04-26 12:22:18.776651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:17.628 [2024-04-26 12:22:18.777096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 12:22:18.777433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 12:22:18.777448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 12:22:18.778036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.778299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.778313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.778538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.778726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.778737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.779065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.779387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.779397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.779744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.780062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.780073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.780402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.780690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.780699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.780996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.781350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.781360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.781668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.782001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.782010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 
00:26:17.628 [2024-04-26 12:22:18.782362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.782685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.782694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.783116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.783421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.783430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.783850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.784136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.784146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.784428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.784735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.784744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.785060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.785286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.785295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.785623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.785934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.785943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.786302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.786609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.786618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 
00:26:17.628 [2024-04-26 12:22:18.786964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.787296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.787305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.787646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.787970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.787979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.788321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.788631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.788639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.788976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.789300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.789311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.789677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.790024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.790035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.790227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.790530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.790540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 12:22:18.790764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.791056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 12:22:18.791067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 
00:26:17.629 [2024-04-26 12:22:18.791283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.791570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.791579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.791984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.792142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.792152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.792469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.792698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.792708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.792911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.793269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.793279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.793611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.793934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.793945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.794271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.794615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.794625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.794920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.795185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.795197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 
00:26:17.629 [2024-04-26 12:22:18.795429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.795677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.795687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.796011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.796169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.796181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.796413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.796706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.796716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.797095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.797441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.797451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.797666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.797777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.797786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.798124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.798411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.798422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.798761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.799094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.799104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 
00:26:17.629 [2024-04-26 12:22:18.799444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.799769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.799779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.800135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.800427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.800436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.800634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.800966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.800977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.801320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.801628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.801638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.801986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.802255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.802264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.802587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.802796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.802805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.802948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.803258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.803267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 
00:26:17.629 [2024-04-26 12:22:18.803594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.803885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.803895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.804227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.804548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.804557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.804751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.805051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.805060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.805399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.805711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.805720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.806101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.806426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.806435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.806780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.806876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.806885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 00:26:17.629 [2024-04-26 12:22:18.807176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.807386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.807395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.629 qpair failed and we were unable to recover it. 
00:26:17.629 [2024-04-26 12:22:18.807749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.808056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-04-26 12:22:18.808066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.808406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.808688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.808698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.808995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.809311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.809320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.809647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.809962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.809971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.810359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.810582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.810591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.810916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.811140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.811149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.811424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.811735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.811744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 
00:26:17.630 [2024-04-26 12:22:18.812072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.812389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.812398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.812733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.813069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.813078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.813402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.813722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.813731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.813972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.814178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.814187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.814403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.814504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.814513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.814832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.815078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.815088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.815422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.815768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.815777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 
00:26:17.630 [2024-04-26 12:22:18.816102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.816400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.816409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.816730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.817056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.817066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.817393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.817682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.817691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.818030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.818353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.818362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.818662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.818834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.818846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.819064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.819432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.819441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.819735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.820061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.820071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 
00:26:17.630 [2024-04-26 12:22:18.820368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.820653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.820662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.820969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.821172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.821181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.821511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.821814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.821823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 12:22:18.822120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.822445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 12:22:18.822455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 12:22:18.822775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.822998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.823007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 12:22:18.823327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.823648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.823657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 12:22:18.823981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.824283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.824292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 
00:26:17.631 [2024-04-26 12:22:18.824615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.824852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.824862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 12:22:18.825172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.825487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.825496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 12:22:18.825824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.826180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.826189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 12:22:18.826537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.826865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.826876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 12:22:18.827108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.827411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 12:22:18.827420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.901 [2024-04-26 12:22:18.827763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.901 [2024-04-26 12:22:18.828157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.901 [2024-04-26 12:22:18.828167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.901 qpair failed and we were unable to recover it. 00:26:17.901 [2024-04-26 12:22:18.828470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.901 [2024-04-26 12:22:18.828757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.901 [2024-04-26 12:22:18.828767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.901 qpair failed and we were unable to recover it. 
00:26:17.901 [2024-04-26 12:22:18.829140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.901 [2024-04-26 12:22:18.829454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.901 [2024-04-26 12:22:18.829465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.901 qpair failed and we were unable to recover it. 00:26:17.901 [2024-04-26 12:22:18.829781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.901 [2024-04-26 12:22:18.830101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.830111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.830329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.830539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.830548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.830865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.831185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.831194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.831520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.831848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.831860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.832172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.832482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.832492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.832849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.833177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.833186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 
00:26:17.902 [2024-04-26 12:22:18.833572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.833888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.833898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.834219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.834537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.834546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.834891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.835210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.835219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.835612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.835965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.835974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.836278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.836594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.836603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.836929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.837134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.837143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.837458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.837810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.837820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 
00:26:17.902 [2024-04-26 12:22:18.838145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.838459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.838468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.838803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.839136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.839145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.839412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.839759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.839768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.840067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.840354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.840363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.840682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.840998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.841007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.841347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.841540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.841549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.841887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.842087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.842096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 
00:26:17.902 [2024-04-26 12:22:18.842422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.842701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.842710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.843022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.843333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.843342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.843685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.843977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.843987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.844363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.844712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.844721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.845079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.845343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.845352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.845680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.846005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.846014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.846351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.846555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.846564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 
00:26:17.902 [2024-04-26 12:22:18.846857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.847108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.847117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-26 12:22:18.847434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.847626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.902 [2024-04-26 12:22:18.847636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.902 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.847966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.848270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.848279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.848624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.848943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.848953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.849281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.849595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.849604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.849992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.850277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.850286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.850624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.850910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.850920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 
00:26:17.903 [2024-04-26 12:22:18.851277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.851594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.851603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.851896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.852207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.852216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.852584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.852888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.852898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.853114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.853390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.853399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.853738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.854056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.854065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.854390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.854712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.854721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.855022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.855327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.855336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 
00:26:17.903 [2024-04-26 12:22:18.855585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.855895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.855905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.856240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.856521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.856530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.856823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.857157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.857167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.857500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.857817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.857826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.858146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.858478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.858487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.858693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.858965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.858975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.859235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.859441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.859450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 
00:26:17.903 [2024-04-26 12:22:18.859766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.860096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.860106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.860318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.860638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.860648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.860996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.861312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.861321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.861531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.861801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.861810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.862144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.862456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.862465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.862816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.863146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.863157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.863538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.863854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.863868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 
00:26:17.903 [2024-04-26 12:22:18.864199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.864515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.864524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.864798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.865020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.865029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-26 12:22:18.865349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.903 [2024-04-26 12:22:18.865633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.865642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.865833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.866139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.866148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.866471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.866786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.866795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.867130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.867354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.867363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.867666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.867971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.867980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 
00:26:17.904 [2024-04-26 12:22:18.868195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.868476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.868486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.868815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.869147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.869156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.869457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.869775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.869784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.870095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.870403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.870412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.870698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.870909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.870918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.871262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.871585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.871594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.871908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.872211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.872220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 
00:26:17.904 [2024-04-26 12:22:18.872575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.872783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.872793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.873104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.873295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.873305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.873627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.874005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.874015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.874325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.874650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.874659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.874959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.875297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.875306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.875644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.875965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.875975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.876368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.876676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.876686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 
00:26:17.904 [2024-04-26 12:22:18.877012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.877327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.877338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.877651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.877967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.877977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.878361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.878661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.878670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.879020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.879370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.879379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.879684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.879972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.879981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.880342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.880584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.880593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.880894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.881192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.881201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 
00:26:17.904 [2024-04-26 12:22:18.881533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.881847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.881857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.882172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.882485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.882494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.882828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.883148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.904 [2024-04-26 12:22:18.883157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-26 12:22:18.883537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.883850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.883865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.884078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.884392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.884402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.884758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.885070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.885079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.885390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.885715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.885724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 
00:26:17.905 [2024-04-26 12:22:18.886095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.886422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.886432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.886753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.887067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.887077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.887410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.887540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.887549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.887864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.888151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.888161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.888483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.888785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.888795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.889139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.889444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.889454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.889775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.889994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.890004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 
00:26:17.905 [2024-04-26 12:22:18.890330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.890681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.890691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.891032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.891343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.891351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.891678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.891729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.891739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.892016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.892355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.892364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.892688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.892908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.892918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.893233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.893432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.893441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.893662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.893939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.893948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 
00:26:17.905 [2024-04-26 12:22:18.894269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.894576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.894585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.894980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.895310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.895321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.895659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.895872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.895882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.896177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.896528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.896537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.896847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.897176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.897186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.897404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.897673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.897681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.897994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.898307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.898316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 
00:26:17.905 [2024-04-26 12:22:18.898634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.898955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.898964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.899271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.899571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.899579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.899897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.900204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.900213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.900509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.900854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.905 [2024-04-26 12:22:18.900863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.905 qpair failed and we were unable to recover it. 00:26:17.905 [2024-04-26 12:22:18.901154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.901477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.901488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.901741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.901974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.901983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.902275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.902556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.902564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 
00:26:17.906 [2024-04-26 12:22:18.902853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.903179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.903188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.903496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.903819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.903827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.904104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.904423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.904432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.904763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.905135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.905145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.905466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.905813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.905823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.906022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.906307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.906316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.906634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.906951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.906960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 
00:26:17.906 [2024-04-26 12:22:18.907305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.907734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.907746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.908049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.908376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.908386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.908714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.909060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.909070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.909404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.909710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.909719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.910025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.910358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.910367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.910713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.910915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.910924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.911314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.911636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.911645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 
00:26:17.906 [2024-04-26 12:22:18.911868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.912237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.912246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.912447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.912793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.912802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.913147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.913469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.913479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.913723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.914048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.914057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.914394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.914777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.914786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.915101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.915414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.915424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.906 qpair failed and we were unable to recover it. 00:26:17.906 [2024-04-26 12:22:18.915729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.906 [2024-04-26 12:22:18.916062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.916071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 
00:26:17.907 [2024-04-26 12:22:18.916260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.916575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.916584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.916903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.917206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.917215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.917539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.917765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.917774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.918091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.918407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.918417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.918702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.918985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.918996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.919297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.919582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.919591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.919859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.920044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.920055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 
00:26:17.907 [2024-04-26 12:22:18.920377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.920710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.920720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.921062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.921362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.921372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.921686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.921897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.921907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.922228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.922405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.922414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.922715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.923015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.923025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.923329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.923666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.923675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.924007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.924209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.924218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 
00:26:17.907 [2024-04-26 12:22:18.924464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.924738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.924747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.925126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.925329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.925337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.925644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.925884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.925893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.926200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.926532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.926543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.926842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.927135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.927144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.927457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.927672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.927681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.927754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.928101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.928111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 
00:26:17.907 [2024-04-26 12:22:18.928425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.928758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.928767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.929123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.929425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.929434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.929772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.930002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.930011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.930221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.930546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.930555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.930854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.931057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.931066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.931400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.931693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.931702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.907 qpair failed and we were unable to recover it. 00:26:17.907 [2024-04-26 12:22:18.932032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.932215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.907 [2024-04-26 12:22:18.932228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 
00:26:17.908 [2024-04-26 12:22:18.932404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.932739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.932749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.933079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.933422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.933432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.933760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.934143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.934152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.934359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.934714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.934723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.935019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.935346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.935355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.935674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.936002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.936011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.936341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.936689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.936697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 
00:26:17.908 [2024-04-26 12:22:18.937023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.937171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.937180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.937462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.937807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.937816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.938014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.938325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.938334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.938624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.938902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.938912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.939230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.939524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.939533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.939922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.940238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.940247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.940571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.940877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.940887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 
00:26:17.908 [2024-04-26 12:22:18.941294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.941581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.941590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.941923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.942259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.942268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.942583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.942895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.942905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.943219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.943358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.943366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.943676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.943980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.943989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.944388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.944710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.944720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.945042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.945323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.945332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 
00:26:17.908 [2024-04-26 12:22:18.945646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.945966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.945975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.946306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.946642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.946651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.946967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.947260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.947269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.947591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.947907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.947917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.948241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.948560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.948570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.948903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.949256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.949265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 00:26:17.908 [2024-04-26 12:22:18.949453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.949772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.949781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.908 qpair failed and we were unable to recover it. 
00:26:17.908 [2024-04-26 12:22:18.950074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.908 [2024-04-26 12:22:18.950369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.950378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.950774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.951150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.951159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.951450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.951743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.951752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.952163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.952475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.952484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.952818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.953123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.953132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.953464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.953748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.953756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.954103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.954435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.954445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 
00:26:17.909 [2024-04-26 12:22:18.954636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.954981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.954991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.955322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.955642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.955651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.955974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.956296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.956304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.956641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.956946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.956955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.957248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.957475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.957484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.957780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.958073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.958082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.958474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.958773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.958782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 
00:26:17.909 [2024-04-26 12:22:18.959100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.959412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.959421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.959612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.959900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.959909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.960106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.960479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.960488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.960797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.961096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.961105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.961459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.961730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.961739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.962099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.962411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.962421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.962755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.963053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.963063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 
00:26:17.909 [2024-04-26 12:22:18.963255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.963540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.963550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.963885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.964181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.964192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.964540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.964850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.964860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.965189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.965483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.965492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.965787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.966105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.966115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.966444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.966762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.966771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.966999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.967324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.967334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 
00:26:17.909 [2024-04-26 12:22:18.967662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.967947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.967957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 12:22:18.968270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 12:22:18.968614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.968623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.968941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.969277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.969286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.969621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.969940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.969950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.970264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.970580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.970589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.970963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.971275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.971284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.971613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.971951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.971960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 
00:26:17.910 [2024-04-26 12:22:18.972271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.972589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.972598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.972888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.973190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.973199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.973522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.973847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.973857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.974178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.974466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.974474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.974813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.975128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.975137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.975402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.975742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.975751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.975991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.976317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.976326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 
00:26:17.910 [2024-04-26 12:22:18.976618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.976695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.976704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.976997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.977173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.977182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.977478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.977813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.977823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.978153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.978473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.978482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.978799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.979089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.979099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.979438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.979742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.979751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.980077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.980380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.980390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 
00:26:17.910 [2024-04-26 12:22:18.980701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.981030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.981040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.981356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.981672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.981681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.982021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.982335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.982343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.982667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.982973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.982982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.983193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.983542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.983551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.983839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.984169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.984178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.984511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.984823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.984832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 
00:26:17.910 [2024-04-26 12:22:18.985129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.985467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.985475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.985865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.986180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.986189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 12:22:18.986402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 12:22:18.986679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.986689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.987014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.987203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.987212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.987550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.987752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.987762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.987972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.988272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.988282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.988584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.988848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.988858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 
00:26:17.911 [2024-04-26 12:22:18.989168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.989387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.989396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.989677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.989904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.989914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.990131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.990436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.990445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.990768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.991060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.991069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.991404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.991780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.991789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.992074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.992388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.992397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.992734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.993056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.993065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 
00:26:17.911 [2024-04-26 12:22:18.993376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.993680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.993689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.993974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.994295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.994304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.994626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.994908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.994917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.995270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.995449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.995461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.995742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.996050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.996059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.996371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.996693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.996704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.996935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.997269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.997280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 
00:26:17.911 [2024-04-26 12:22:18.997595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.997907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.997917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.998245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.998539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.998548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.998888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.999181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.999191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:18.999506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.999819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:18.999828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:19.000134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:19.000424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:19.000434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 12:22:19.000727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:19.001020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 12:22:19.001030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.001328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.001633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.001642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 
00:26:17.912 [2024-04-26 12:22:19.001961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.002254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.002264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.002444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.002740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.002749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.003085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.003358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.003367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.003704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.004020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.004030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.004382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.004699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.004708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.005015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.005315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.005325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.005655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.005859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.005869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 
00:26:17.912 [2024-04-26 12:22:19.006182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.006494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.006502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.006833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.007047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.007057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.007263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.007552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.007561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.007850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.008136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.008146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.008524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.008853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.008863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.009055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.009383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.009392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.009730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.010030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.010040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 
00:26:17.912 [2024-04-26 12:22:19.010346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.010628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.010638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.010952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.011266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.011275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.011609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.011918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.011928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.012255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.012532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.012542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.012868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.013176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.013185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.013514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.013836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.013850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.014164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.014481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.014490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 
00:26:17.912 [2024-04-26 12:22:19.014727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.015065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.015075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.015395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.015685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.015694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.016080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.016280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.016289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.016481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.016810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.016818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.017157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.017472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.017481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.017802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.018103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.018113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 12:22:19.018429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 12:22:19.018751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.018760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 
00:26:17.913 [2024-04-26 12:22:19.019098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.019413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.019423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.019759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.020082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.020092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.020483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.020863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.020873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.021053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.021347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.021356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.021688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.022006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.022016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.022335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.022642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.022651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.022975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.023290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.023300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 
00:26:17.913 [2024-04-26 12:22:19.023644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.023826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.023835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.024130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.024472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.024481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.024777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.024974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.024984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.025208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.025475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.025484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.025775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.026101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.026111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.026429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.026726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.026739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.027024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.027354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.027364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 
00:26:17.913 [2024-04-26 12:22:19.027704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.028023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.028032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.028368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.028719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.028728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.029024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.029357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.029366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.029669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.029990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.030000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.030151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.030453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.030462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.030770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.031097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.031107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.031423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.031705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.031714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 
00:26:17.913 [2024-04-26 12:22:19.032021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.032356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.032366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.032681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.032988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.033000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.033320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.033624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.033633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.033971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.034205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.034214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.034488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.034808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.034817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.035139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.035417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.035427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 12:22:19.035761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.036075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 12:22:19.036085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 
00:26:17.913 [2024-04-26 12:22:19.036409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.036574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.036584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.036902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.037194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.037204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.037529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.037719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.037729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.038023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.038228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.038237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.038510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.038814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.038823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.039043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.039365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.039374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.039696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.040004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.040013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 
00:26:17.914 [2024-04-26 12:22:19.040347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.040659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.040668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.040986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.041276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.041285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.041607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.041930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.041940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.042265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.042598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.042607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.042953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.043232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.043241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.043513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.043813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.043822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.044156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.044470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.044480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 
00:26:17.914 [2024-04-26 12:22:19.044885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.045235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.045245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.045597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.045917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.045926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.046243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.046556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.046565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.046885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.047206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.047215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.047536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.047866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.047876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.048228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.048542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.048552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.048869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.049057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.049066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 
00:26:17.914 [2024-04-26 12:22:19.049264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.049535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.049545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.049919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.050232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.050241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.050585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.050887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.050896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.051187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.051503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.051512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.051807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.052154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.052164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.052439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.052779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.052789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.053088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.053390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.053399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 
00:26:17.914 [2024-04-26 12:22:19.053689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.054026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.914 [2024-04-26 12:22:19.054035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.914 qpair failed and we were unable to recover it. 00:26:17.914 [2024-04-26 12:22:19.054357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.054681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.054690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.054900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.055257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.055268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.055600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.055829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.055845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.056147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.056429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.056438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.056667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.056971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.056981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.057196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.057522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.057533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 
00:26:17.915 [2024-04-26 12:22:19.057840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.058121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.058131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.058327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.058613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.058622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.058775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.059086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.059095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.059366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.059698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.059707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.060038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.060362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.060372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.060598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.060863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.060873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.061177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.061478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.061487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 
00:26:17.915 [2024-04-26 12:22:19.061818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.062126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.062135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.062346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.062540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.062550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.062886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.063198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.063208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.063510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.063806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.063818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.064119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.064456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.064466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.064773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.065001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.065012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.065383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.065689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.065699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 
00:26:17.915 [2024-04-26 12:22:19.065871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.066217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.066226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.066544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.066728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.066737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.067112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.067435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.067445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.067763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.068075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.068084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.915 [2024-04-26 12:22:19.068403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.068689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.915 [2024-04-26 12:22:19.068698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.915 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.069036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.069376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.069385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.069720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.070021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.070031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 
00:26:17.916 [2024-04-26 12:22:19.070220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.070415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.070424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.070698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.071005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.071016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.071344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.071649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.071658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.072007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.072320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.072329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.072614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.072943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.072953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.073343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.073607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.073616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.073931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.074150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.074159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 
00:26:17.916 [2024-04-26 12:22:19.074430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.074777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.074786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.075103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.075384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.075394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.075703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.076016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.076026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.076349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.076697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.076706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.077036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.077331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.077340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.077663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.077897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.077907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.078216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.078382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.078393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 
00:26:17.916 [2024-04-26 12:22:19.078657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.078951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.078961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.079272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.079586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.079595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.079943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.080264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.080273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.080450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.080763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.080772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.081098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.081373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.081382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.081719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.081931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.081941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.082219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.082434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.082443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 
00:26:17.916 [2024-04-26 12:22:19.082768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.083086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.083095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.083412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.083731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.083740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.084102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.084429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.084439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.084755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.085076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.085086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.085417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.085721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.085731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 12:22:19.086039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.086349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 12:22:19.086359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.086696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.087051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.087061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 
00:26:17.917 [2024-04-26 12:22:19.087349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.087657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.087666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.088005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.088321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.088330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.088659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.088992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.089002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.089344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.089579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.089588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.089896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.090207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.090216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.090517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.090715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.090725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.091015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.091389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.091398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 
00:26:17.917 [2024-04-26 12:22:19.091750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.092043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.092052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.092368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.092680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.092689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.092878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.093174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.093183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.093501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.093806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.093815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.094217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.094531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.094541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.094910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.095221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.095232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.095522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.095824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.095833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 
00:26:17.917 [2024-04-26 12:22:19.096092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.096431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.096440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.096749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.097060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.097069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.097369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.097568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.097578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.097854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.098185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.098194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.098486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.098793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.098802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.099122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.099439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.099449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.099757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.100090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.100100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 
00:26:17.917 [2024-04-26 12:22:19.100409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.100735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.100744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.101057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.101371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.101380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.101750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.102050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.102059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.102357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.102661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.102670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.103072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.103372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.103381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.103740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.104033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.104042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 12:22:19.104305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 12:22:19.104626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.104635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.918 qpair failed and we were unable to recover it. 
00:26:17.918 [2024-04-26 12:22:19.104937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.105279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.105288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.918 qpair failed and we were unable to recover it. 00:26:17.918 [2024-04-26 12:22:19.105651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.105954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.105963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.918 qpair failed and we were unable to recover it. 00:26:17.918 [2024-04-26 12:22:19.106167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.106450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.106459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.918 qpair failed and we were unable to recover it. 00:26:17.918 [2024-04-26 12:22:19.106765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.107062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.107072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.918 qpair failed and we were unable to recover it. 00:26:17.918 [2024-04-26 12:22:19.107396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.107684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.107693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.918 qpair failed and we were unable to recover it. 00:26:17.918 [2024-04-26 12:22:19.107995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.108217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.108226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.918 qpair failed and we were unable to recover it. 00:26:17.918 [2024-04-26 12:22:19.108520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.108844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.108854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.918 qpair failed and we were unable to recover it. 
00:26:17.918 [2024-04-26 12:22:19.109155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.109472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.109481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.918 qpair failed and we were unable to recover it. 00:26:17.918 [2024-04-26 12:22:19.109667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.109923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.109934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.918 qpair failed and we were unable to recover it. 00:26:17.918 [2024-04-26 12:22:19.110235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.110535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.110544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.918 qpair failed and we were unable to recover it. 00:26:17.918 [2024-04-26 12:22:19.110771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.111099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.111109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.918 qpair failed and we were unable to recover it. 00:26:17.918 [2024-04-26 12:22:19.111428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.111707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.918 [2024-04-26 12:22:19.111716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:17.918 qpair failed and we were unable to recover it. 00:26:17.918 [2024-04-26 12:22:19.112029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 12:22:19.112359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 12:22:19.112370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 00:26:18.190 [2024-04-26 12:22:19.113264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 12:22:19.113567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 12:22:19.113577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 
00:26:18.190 [2024-04-26 12:22:19.113776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 12:22:19.114063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 12:22:19.114073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 00:26:18.190 [2024-04-26 12:22:19.114387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 12:22:19.114553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 12:22:19.114563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 00:26:18.190 [2024-04-26 12:22:19.114886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 12:22:19.115093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 12:22:19.115102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 00:26:18.190 [2024-04-26 12:22:19.115455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 12:22:19.115646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.115656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.115983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.116263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.116274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.116567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.116881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.116891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.117183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.117503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.117512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 
00:26:18.191 [2024-04-26 12:22:19.117733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.117975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.117984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.118308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.118611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.118620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.118989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.119294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.119303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.119617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.119850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.119860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.120065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.120343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.120353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.120681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.121020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.121029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.121358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.121669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.121678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 
00:26:18.191 [2024-04-26 12:22:19.121883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.122198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.122207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.122538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.122886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.122896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.123183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.123498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.123507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.123815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.124163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.124172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.124498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.124830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.124844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.125177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.125477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.125486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.125824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.126137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.126147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 
00:26:18.191 [2024-04-26 12:22:19.126488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.126794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.126805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.127125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.127391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.127400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.127742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.127934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.127944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.128232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.128566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.128575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.128872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.129180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.129189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.129466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.129765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.129773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.130069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.130377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.130386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 
00:26:18.191 [2024-04-26 12:22:19.130683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.130988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.130999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.131333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.131553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.131563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.131755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.132054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.132063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.191 qpair failed and we were unable to recover it. 00:26:18.191 [2024-04-26 12:22:19.132346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.191 [2024-04-26 12:22:19.132621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.132633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.132962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.133161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.133170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.133508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.133829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.133843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.134160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.134477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.134487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 
00:26:18.192 [2024-04-26 12:22:19.134717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.135018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.135028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.135348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.135664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.135673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.135860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.136138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.136148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.136477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.136778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.136787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.137119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.137375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.137384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.137706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.137999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.138008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.138346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.138660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.138669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 
00:26:18.192 [2024-04-26 12:22:19.138993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.139287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.139296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.139630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.139944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.139953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.140274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.140591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.140599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.140947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.141255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.141264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.141579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.141973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.141982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.142274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.142586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.142596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.142909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.143229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.143238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 
00:26:18.192 [2024-04-26 12:22:19.143539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.143867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.143877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.144198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.144512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.144521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.144846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.145126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.145135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.145458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.145628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.145637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.145955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.146253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.146262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.146591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.146928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.146938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.147272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.147607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.147615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 
00:26:18.192 [2024-04-26 12:22:19.147865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.148068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.148078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.148398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.148710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.148721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.149063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.149368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.149378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.149707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.150017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.192 [2024-04-26 12:22:19.150026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.192 qpair failed and we were unable to recover it. 00:26:18.192 [2024-04-26 12:22:19.150340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.150619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.150628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.150796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.151079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.151089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.151415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.151718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.151727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 
00:26:18.193 [2024-04-26 12:22:19.152064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.152363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.152371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.152692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.152970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.152980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.153180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.153488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.153496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.153805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.154128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.154137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.154390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.154719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.154728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.154903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.155213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.155222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.155529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.155865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.155874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 
00:26:18.193 [2024-04-26 12:22:19.156167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.156446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.156455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.156777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.156966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.156976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.157273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.157553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.157562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.157922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.158278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.158287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.158596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.158926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.158936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.159256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.159603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.159612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.159976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.160274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.160283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 
00:26:18.193 [2024-04-26 12:22:19.160567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.160902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.160911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.161111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.161398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.161407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.161693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.161977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.161987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.162210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.162512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.162521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.162819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.163139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.163148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.163469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.163792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.163802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.164003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.164305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.164314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 
00:26:18.193 [2024-04-26 12:22:19.164500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.164863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.164872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.165229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.165544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.165554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.165865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.166202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.166211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.166412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.166691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 12:22:19.166701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 12:22:19.167017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.167364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.167373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.167715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.168016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.168026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.168232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.168448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.168457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 
00:26:18.194 [2024-04-26 12:22:19.168759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.169063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.169073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.169231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.169559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.169568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.169791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.170121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.170131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.170443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.170667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.170676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.170935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.171191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.171200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.171507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.171845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.171855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.172251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.172437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.172447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 
00:26:18.194 [2024-04-26 12:22:19.172827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.173049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.173058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.173344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.173634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.173644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.173860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.174188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.174196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.174576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.174880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.174889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.175201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.175532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.175541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.175731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.175831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.175845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.176179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.176500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.176509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 
00:26:18.194 [2024-04-26 12:22:19.176816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.177026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.177035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.177239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.177614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.177623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.177956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.178307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.178316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.178635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.178957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.178966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.179281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.179608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.179617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.179909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.180131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.180140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.180337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.180546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.180556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 
00:26:18.194 [2024-04-26 12:22:19.180879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.181079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.181088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.181396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.181644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.181652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.182049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.182366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.182376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.182694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.182917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.182926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.183240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.183348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 12:22:19.183356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 12:22:19.183759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.184118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.184129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.184427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.184743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.184751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 
00:26:18.195 [2024-04-26 12:22:19.185183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.185511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.185520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.185703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.186069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.186079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.186426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.186740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.186749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.187023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.187376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.187386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.187608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.187809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.187819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.188072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.188419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.188429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.188719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.189060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.189070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 
00:26:18.195 [2024-04-26 12:22:19.189291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.189634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.189643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.190020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.190321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.190330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.190666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.190867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.190876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.190996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.191248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.191258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.191585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.191784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.191794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.192085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.192409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.192419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.192635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.192892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.192902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 
00:26:18.195 [2024-04-26 12:22:19.193127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.193462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.193473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.193792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.194096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.194105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.194406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.194626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.194634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.194937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.195231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.195239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.195454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.195782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.195791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.196124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.196499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.196508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 12:22:19.196806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.196991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 12:22:19.197001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 
00:26:18.196 [2024-04-26 12:22:19.197315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.197639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.197648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.197950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.198283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.198291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.198607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.198947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.198957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.199167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.199453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.199462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.199771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.200086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.200095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.200467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.200662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.200671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.200912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.201207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.201216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 
00:26:18.196 [2024-04-26 12:22:19.201527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.201843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.201852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.202235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.202587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.202596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.202913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.203246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.203255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.203469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.203798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.203807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.204017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.204255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.204264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.204454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.204758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.204768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.205039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.205337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.205346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 
00:26:18.196 [2024-04-26 12:22:19.205549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.205794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.205804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.206116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.206417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.206427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.206745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.207041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.207051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.207361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.207700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.207709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.208009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.208239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.208248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.208418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.208690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.208699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.208885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.209177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.209187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 
00:26:18.196 [2024-04-26 12:22:19.209482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.209814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.209823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.210102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.210488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.210497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.210795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.210924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.210934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.211285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.211465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.211474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.211843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.212159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.212168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.212353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.212649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.212658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 12:22:19.213001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.213330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.213339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 
00:26:18.196 [2024-04-26 12:22:19.213669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 12:22:19.213854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.213864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.214175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.214505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.214514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.214849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.215037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.215047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.215247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.215624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.215633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.215962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.216274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.216283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.216573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.216831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.216845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.217019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.217396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.217405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 
00:26:18.197 [2024-04-26 12:22:19.217728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.217978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.217987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.218290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.218572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.218581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.218905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.219230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.219239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.219634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.219982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.219991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.220205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.220401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.220411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.220728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.221056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.221066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.221366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.221708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.221717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 
00:26:18.197 [2024-04-26 12:22:19.222043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.222241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.222250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.222563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.222880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.222890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.223229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.223405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.223417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.223735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.224051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.224067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.224387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.224688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.224697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.224894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.225175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.225184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.225531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.225828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.225840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 
00:26:18.197 [2024-04-26 12:22:19.226146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.226345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.226354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.226660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.226972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.226981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.227296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.227630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.227639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.227966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.228298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.228307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.228607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.228908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.228918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.229239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.229528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.229536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.229863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.230204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.230213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 
00:26:18.197 [2024-04-26 12:22:19.230502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.230781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.197 [2024-04-26 12:22:19.230790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.197 qpair failed and we were unable to recover it. 00:26:18.197 [2024-04-26 12:22:19.230996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.231295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.231304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.231481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.231744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.231753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.232030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.232360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.232369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.232679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.232877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.232886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.233200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.233397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.233406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.233637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.233952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.233962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 
00:26:18.198 [2024-04-26 12:22:19.234284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.234633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.234643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.234833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.235171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.235180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.235495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.235805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.235814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.236172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.236487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.236497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.236814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.237143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.237153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.237480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.237794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.237802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.238139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.238420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.238429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 
00:26:18.198 [2024-04-26 12:22:19.238786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.239012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.239022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.239224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.239540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.239548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.239863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.240154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.240164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.240452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.240681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.240691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.241015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.241329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.241338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.241638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.241963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.241972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.242290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.242488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.242497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 
00:26:18.198 [2024-04-26 12:22:19.242781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.243105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.243114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.243503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.243775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.243784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.244154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.244502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.244511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.244827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.245028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.245037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.245351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.245671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.245680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.246001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.246292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.246301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.246625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.246949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.246958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 
00:26:18.198 [2024-04-26 12:22:19.247270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.247589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.247598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 12:22:19.247939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.248241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 12:22:19.248250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.248562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.248699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.248710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.249026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.249335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.249344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.249686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.250008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.250019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.250354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.250630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.250640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.251016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.251301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.251310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 
00:26:18.199 [2024-04-26 12:22:19.251630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.251816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.251825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.252149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.252463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.252472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.252742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.253125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.253134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.253434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.253738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.253747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.254039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.254328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.254339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.254660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.254942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.254951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.255242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.255559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.255568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 
00:26:18.199 [2024-04-26 12:22:19.255909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.256200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.256208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.256372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.256676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.256686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.257025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.257338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.257348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.257680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.257999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.258008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.258209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.258528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.258537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.258850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.259170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.259180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.259511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.259823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.259832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 
00:26:18.199 [2024-04-26 12:22:19.260120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.260438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.260449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.260766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.261091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.261100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.261444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.261771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.261781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.262149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.262447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.262455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.262795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.263006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.263015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.263328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.263651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.263660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.263963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.264273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.264282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 
00:26:18.199 [2024-04-26 12:22:19.264600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.264922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.264931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.265228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.265562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.265571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 12:22:19.265882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 12:22:19.266181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.266190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.266497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.266816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.266825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.267130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.267441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.267449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.267784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.268086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.268096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.268403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.268703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.268712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 
00:26:18.200 [2024-04-26 12:22:19.269157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.269461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.269470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.269791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.270100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.270109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.270445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.270757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.270766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.271000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.271255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.271263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.271593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.271799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.271807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.272119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.272435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.272444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.272779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.273092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.273102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 
00:26:18.200 [2024-04-26 12:22:19.273414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.273727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.273737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.274000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.274225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.274234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.274561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.274731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.274740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.275045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.275230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.275240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.275519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.275854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.275864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.276156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.276468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.276477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.276793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.277121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.277130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 
00:26:18.200 [2024-04-26 12:22:19.277441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.277630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.277641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.278009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.278332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.278341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.278602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.278827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.278861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.279172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.279527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.279537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.279874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.280159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.280167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 12:22:19.280553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.280900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 12:22:19.280909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.281249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.281561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.281571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 
00:26:18.201 [2024-04-26 12:22:19.281881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.282163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.282172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.282456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.282626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.282635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.282971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.283186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.283194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.283513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.283817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.283827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.284156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.284499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.284509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.284859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.285171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.285180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.285511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.285798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.285808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 
00:26:18.201 [2024-04-26 12:22:19.286148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.286460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.286470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.286730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.287056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.287065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.287362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.287676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.287686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.287995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.288309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.288319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.288653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.288966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.288975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.289291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.289605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.289614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.289921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.290261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.290270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 
00:26:18.201 [2024-04-26 12:22:19.290581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.290909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.290919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.291269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.291539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.291548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.291879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.292222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.292236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.292576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.292829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.292851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.293137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.293427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.293436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.293788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.294103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.294113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.294397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.294677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.294686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 
00:26:18.201 [2024-04-26 12:22:19.295075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.295389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.295399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.295723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.296021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.296031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.296362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.296674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.296683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.297009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.297330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.297339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 12:22:19.297694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 12:22:19.297980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.297990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.298219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.298438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.298447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.298797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.299125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.299135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 
00:26:18.202 [2024-04-26 12:22:19.299454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.299765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.299774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.300141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.300441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.300450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.300732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.301023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.301033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.301363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.301650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.301659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.301880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.302206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.302215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.302399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.302678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.302687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.303008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.303336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.303345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 
00:26:18.202 [2024-04-26 12:22:19.303683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.304010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.304020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.304343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.304659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.304668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.304984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.305317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.305326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.305637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.305961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.305970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.306263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.306553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.306561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.306887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.307190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.307199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.307485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.307792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.307801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 
00:26:18.202 [2024-04-26 12:22:19.308108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.308384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.308393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.308708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.309020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.309029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.309246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.309584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.309593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.309966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.310263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.310272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.310568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.310899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.310908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.311205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.311480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.311488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.311803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.312080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.312090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 
00:26:18.202 [2024-04-26 12:22:19.312411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.312753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.312762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.313094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.313405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 12:22:19.313414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 12:22:19.313760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.313926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.313936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.314210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.314481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.314489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.314786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.315110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.315119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.315425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.315745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.315754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.315967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.316387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.316396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 
00:26:18.203 [2024-04-26 12:22:19.316651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.316992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.317001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.317293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.317488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.317497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.317806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.318025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.318035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.318348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.318659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.318668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.319001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.319320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.319329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.319709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.320018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.320028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.320396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.320716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.320725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 
00:26:18.203 [2024-04-26 12:22:19.321065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.321361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.321369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.321776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.322069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.322078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.322379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.322683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.322693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.323026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.323411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.323420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.323708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.324018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.324029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.324363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.324670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.324679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.325038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.325221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.325229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 
00:26:18.203 [2024-04-26 12:22:19.325555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.325834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.325853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.326173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.326458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.326467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.326758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.327071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.327081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.327417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.327714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.327722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.328068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.328368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.328377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.328598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.328900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.328910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.329206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.329523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.329532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 
00:26:18.203 [2024-04-26 12:22:19.329866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.330170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.330179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 12:22:19.330493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 12:22:19.330804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.330813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.331127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.331467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.331477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.331699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.331932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.331941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.332238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.332529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.332538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.332845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.333143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.333152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.333464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.333780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.333789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 
00:26:18.204 [2024-04-26 12:22:19.334084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.334369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.334378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.334715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.335017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.335027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.335356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.335650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.335659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.335902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.336277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.336286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.336590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.336911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.336920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.337261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.337565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.337574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.337974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.338315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.338325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 
00:26:18.204 [2024-04-26 12:22:19.338643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.338961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.338971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.339140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.339466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.339475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.339776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.340073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.340082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.340462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.340757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.340766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.341098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.341390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.341399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.341683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.342024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.342033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.342370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.342686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.342695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 
00:26:18.204 [2024-04-26 12:22:19.342992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.343306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.343315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.343629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.343948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.343957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.344274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.344592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.344600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.344900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.345217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.345225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.345522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.345833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.345853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 12:22:19.346166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.346491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 12:22:19.346501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.346849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.347160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.347168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 
00:26:18.205 [2024-04-26 12:22:19.347465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.347802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.347812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.348199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.348545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.348553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.348867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.349177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.349186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.349367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.349650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.349659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.350029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.350382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.350392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.350718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.351091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.351102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.351313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.351590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.351599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 
00:26:18.205 [2024-04-26 12:22:19.351893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.352187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.352196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.352537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.352845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.352854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.353158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.353456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.353465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.353789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.354114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.354123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.354406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.354711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.354720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.354902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.355177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.355186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.355463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.355798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.355810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 
00:26:18.205 [2024-04-26 12:22:19.356039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.356370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.356379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.356710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.357027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.357036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.357356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.357664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.357673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.358001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.358340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.358349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.358663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.358953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.358962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.359191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.359483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.359495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.359836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.360145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.360153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 
00:26:18.205 [2024-04-26 12:22:19.360463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.360629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.360639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.360996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.361300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.361309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.361519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.361798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.361809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 12:22:19.362021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.362271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 12:22:19.362280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.362573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.362877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.362887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.363190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.363534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.363543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.363888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.364195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.364204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 
00:26:18.206 [2024-04-26 12:22:19.364534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.364852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.364862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.365183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.365495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.365504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.365826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.366122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.366131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.366528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.366828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.366845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.367185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.367478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.367487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.367814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.368014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.368024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.368314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.368512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.368521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 
00:26:18.206 [2024-04-26 12:22:19.368812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.369010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.369020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.369323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.369607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.369615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.369909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.370074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.370084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.370308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.370562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.370571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.370845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.371176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.371185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.371483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.371815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.371825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.372154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.372468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.372478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 
00:26:18.206 [2024-04-26 12:22:19.372790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.373110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.373121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.373430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.373736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.373746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.374113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.374298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.374308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.374618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.374943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.374952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.375151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.375411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.375420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.375716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.375951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.375961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.376269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.376408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.376417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 
00:26:18.206 [2024-04-26 12:22:19.376742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.377027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.377037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.377341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.377561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.377570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.377908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.378228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.378237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.378545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.378872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.378881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.206 qpair failed and we were unable to recover it. 00:26:18.206 [2024-04-26 12:22:19.379192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.206 [2024-04-26 12:22:19.379457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.379466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.379702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.380012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.380022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.380321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.380635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.380644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 
00:26:18.207 [2024-04-26 12:22:19.380936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.381282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.381291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.381593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.381915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.381925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.382246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.382437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.382445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.382765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.383114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.383124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.383421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.383743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.383751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.384071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.384346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.384355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.384554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.384899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.384908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 
00:26:18.207 [2024-04-26 12:22:19.385224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.385536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.385545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.385882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.386193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.386202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.386524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.386822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.386831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.387175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.387489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.387499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.387711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.387980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.387990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.388349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.388648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.388658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.388980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.389286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.389296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 
00:26:18.207 [2024-04-26 12:22:19.389638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.389862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.389872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.390182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.390382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.390391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.390690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.390965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.390974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.391160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.391484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.391493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.391791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.391976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.391987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.392308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.392620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.392629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.392733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.393049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.393059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 
00:26:18.207 [2024-04-26 12:22:19.393401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.393703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.393712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.393952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.394334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.394344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.394658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.394960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.394970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.395305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.395533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.395541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.395769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.396037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.396046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.207 [2024-04-26 12:22:19.396354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.396642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.207 [2024-04-26 12:22:19.396652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.207 qpair failed and we were unable to recover it. 00:26:18.208 [2024-04-26 12:22:19.396974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.208 [2024-04-26 12:22:19.397253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.208 [2024-04-26 12:22:19.397262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.208 qpair failed and we were unable to recover it. 
00:26:18.208 [2024-04-26 12:22:19.397637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.208 [2024-04-26 12:22:19.397942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.208 [2024-04-26 12:22:19.397951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.208 qpair failed and we were unable to recover it. 00:26:18.208 [2024-04-26 12:22:19.398257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.208 [2024-04-26 12:22:19.398538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.208 [2024-04-26 12:22:19.398548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.208 qpair failed and we were unable to recover it. 00:26:18.208 [2024-04-26 12:22:19.398857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.208 [2024-04-26 12:22:19.399203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.208 [2024-04-26 12:22:19.399212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.208 qpair failed and we were unable to recover it. 00:26:18.208 [2024-04-26 12:22:19.399546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.208 [2024-04-26 12:22:19.399874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.208 [2024-04-26 12:22:19.399883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.208 qpair failed and we were unable to recover it. 00:26:18.208 [2024-04-26 12:22:19.400186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.208 [2024-04-26 12:22:19.400412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.208 [2024-04-26 12:22:19.400421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.208 qpair failed and we were unable to recover it. 00:26:18.208 [2024-04-26 12:22:19.400664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.401000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.401011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 12:22:19.401324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.401522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.401531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 
00:26:18.482 [2024-04-26 12:22:19.401846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.402064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.402072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 12:22:19.402414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.402727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.402737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 12:22:19.403062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.403393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.403402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 12:22:19.403591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.403900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.403910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 12:22:19.404236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.404584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.404594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 12:22:19.404979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.405294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.405303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 12:22:19.405621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.405780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.405789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 
00:26:18.482 [2024-04-26 12:22:19.405991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.406329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.406338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 12:22:19.406646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.406943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.406953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 12:22:19.407297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.407472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.407482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 12:22:19.407820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.408170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.408181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 12:22:19.408444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.408733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.408743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 12:22:19.408986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.409317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.409327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 12:22:19.409655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.409858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.409868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 
00:26:18.482 [2024-04-26 12:22:19.410250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.410541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.410550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 12:22:19.410885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 12:22:19.411220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.411230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.411568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.411882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.411892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.412231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.412595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.412605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.412921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.413266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.413276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.413531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.413778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.413789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.414104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.414451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.414460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 
00:26:18.483 [2024-04-26 12:22:19.414753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.415086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.415096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.415326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.415627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.415637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.415980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.416205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.416215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.416536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.416899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.416909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.417210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.417552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.417562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.417877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.418204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.418214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.418544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.418747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.418757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 
00:26:18.483 [2024-04-26 12:22:19.419051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.419250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.419260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.419567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.419874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.419884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.420088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.420434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.420444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.420759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.420865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.420875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.421183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.421534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.421543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.421721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.421923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.421932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.422238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.422550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.422561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 
00:26:18.483 [2024-04-26 12:22:19.422882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.423114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.423123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.423435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.423752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.423762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.424084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.424394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.424403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.424730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.425056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.425066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.425360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.425670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.425679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.426050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.426393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.426403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 12:22:19.426723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.426944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 12:22:19.426954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 
00:26:18.484 [2024-04-26 12:22:19.427323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.427606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.427615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.427972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.428260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.428270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.428601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.428912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.428922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.429236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.429550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.429559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.429873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.430269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.430278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.430569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.430880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.430889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.431214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.431384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.431393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 
00:26:18.484 [2024-04-26 12:22:19.431697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.431909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.431919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.432231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.432554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.432563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.432769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.433070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.433080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.433379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.433692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.433701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.433896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.434189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.434198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.434529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.434880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.434890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.435195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.435514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.435523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 
00:26:18.484 [2024-04-26 12:22:19.435913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.436314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.436324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.436643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.436957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.436967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.437264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.437627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.437636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.437832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.438118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.438127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.438515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.438734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.438743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.438947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.439249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.439258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.439459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.439691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.439700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 
00:26:18.484 [2024-04-26 12:22:19.440030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.440328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.440336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.440666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.440844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.440854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.441166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.441481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.441490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.441763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.442149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.442159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.442463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.442783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.442792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.443222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.443415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 12:22:19.443424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 12:22:19.443778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.444205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.444215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 
00:26:18.485 [2024-04-26 12:22:19.444504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.444740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.444749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.445149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.445326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.445334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.445713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.446030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.446040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.446354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.446687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.446696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.447062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.447335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.447344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.447642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.447867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.447877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.448145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.448447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.448456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 
00:26:18.485 [2024-04-26 12:22:19.448688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.449030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.449039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.449331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.449525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.449535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.449824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.450038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.450047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.450380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.450679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.450688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.451023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.451328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.451337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.451652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.451846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.451855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.452175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.452495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.452504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 
00:26:18.485 [2024-04-26 12:22:19.452694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.453085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.453094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.453295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.453582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.453593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.453791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.454002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.454011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.454289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.454443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.454452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.454754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.455091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.455100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.455290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.455565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.455575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.455962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.456227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.456236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 
00:26:18.485 [2024-04-26 12:22:19.456562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.456890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.456899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.457224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.457517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.457526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.457823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.458074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.458083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.485 qpair failed and we were unable to recover it. 00:26:18.485 [2024-04-26 12:22:19.458419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.458735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.485 [2024-04-26 12:22:19.458744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.459154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.459424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.459433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.459762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.459954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.459963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.460288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.460605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.460614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 
00:26:18.486 [2024-04-26 12:22:19.460914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.461253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.461262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.461596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.461787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.461797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.462000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.462331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.462341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.462529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.462814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.462823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.463185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.463512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.463521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.463840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.464146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.464155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.464540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.464888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.464897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 
00:26:18.486 [2024-04-26 12:22:19.465214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.465547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.465556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.465861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.466167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.466175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.466484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.466668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.466678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.466971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.467268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.467277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.467660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.467949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.467959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.468135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.468415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.468423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.468753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.469075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.469085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 
00:26:18.486 [2024-04-26 12:22:19.469417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.469768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.469777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.470128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.470446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.470456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.470778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.471089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.471099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.471455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.471635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.471644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.472011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.472313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.472322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.472626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.472953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.472963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.473284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.473566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.473575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 
00:26:18.486 [2024-04-26 12:22:19.473962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.474275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.474285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.474631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.474907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.474917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.475241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.475632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.475641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.475934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.476110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.476120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.486 qpair failed and we were unable to recover it. 00:26:18.486 [2024-04-26 12:22:19.476426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.486 [2024-04-26 12:22:19.476692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.476701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.477027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.477335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.477344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.477631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.477928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.477938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 
00:26:18.487 [2024-04-26 12:22:19.478227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.478540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.478549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.478869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.479154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.479163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.479477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.479777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.479786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.480127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.480440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.480450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.480760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.481075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.481085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.481235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.481534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.481544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.481873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.482192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.482201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 
00:26:18.487 [2024-04-26 12:22:19.482526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.482850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.482860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.483168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.483463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.483472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.483697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.483966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.483975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.484277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.484532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.484543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.484846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.485131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.485140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.485469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.485784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.485794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.486106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.486415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.486424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 
00:26:18.487 [2024-04-26 12:22:19.486571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.486828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.486846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.487135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.487446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.487456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.487775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.487959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.487968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.487 qpair failed and we were unable to recover it. 00:26:18.487 [2024-04-26 12:22:19.488255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.488588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 12:22:19.488597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.488881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.489069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.489079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.489383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.489589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.489598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.489909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.490230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.490243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 
00:26:18.488 [2024-04-26 12:22:19.490558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.490800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.490809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.491139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.491460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.491470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.491684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.492016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.492025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.492317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.492597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.492606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.492768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.493147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.493156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.493461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.493771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.493780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.494129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.494447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.494457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 
00:26:18.488 [2024-04-26 12:22:19.494720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.494911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.494922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.495277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.495556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.495565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.495946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.496225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.496234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.496527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.496849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.496859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.497168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.497483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.497492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.497707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.497927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.497937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.498247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.498518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.498528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 
00:26:18.488 [2024-04-26 12:22:19.498817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.499129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.499139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.499469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.499785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.499794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.500141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.500443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.500452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.500773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.501077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.501087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.501414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.501721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.501731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.502045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.502437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.502446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.502751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.503060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.503069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 
00:26:18.488 [2024-04-26 12:22:19.503388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.503702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.503710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.504004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.504281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.504289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.504605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.504928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.504938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.505303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.505600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.505616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 12:22:19.505952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 12:22:19.506278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.506287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.506603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.506910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.506920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.507245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.507399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.507409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 
00:26:18.489 [2024-04-26 12:22:19.507613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.507931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.507940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.508222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.508534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.508543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.508834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.509160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.509170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.509485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.509764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.509773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.510157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.510499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.510508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.510812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.511114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.511123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.511462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.511775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.511783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 
00:26:18.489 [2024-04-26 12:22:19.512090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.512411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.512420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.512792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.513083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.513093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.513474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.513700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.513709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.513936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.514247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.514256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.514535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.514842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.514851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.515169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.515366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.515375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.515696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.516034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.516043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 
00:26:18.489 [2024-04-26 12:22:19.516427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.516738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.516748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.517083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.517397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.517406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.517739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.518031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.518041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.518250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.518571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.518580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.518882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.519180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.519189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.519387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.519681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.519690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.519982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.520224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.520233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 
00:26:18.489 [2024-04-26 12:22:19.520565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.520873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.520882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.521225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.521524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.521534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.521751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.522034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.522044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.522270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.522571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.522580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.522950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.523225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.523234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 12:22:19.523547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 12:22:19.523860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.523870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.524209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.524499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.524509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 
00:26:18.490 [2024-04-26 12:22:19.524828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.525023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.525033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.525243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.525522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.525531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.525813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.526075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.526084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.526285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.526572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.526581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.526896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.527180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.527189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.527510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.527795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.527804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.528003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.528290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.528298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 
00:26:18.490 [2024-04-26 12:22:19.528624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.528806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.528815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.529151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.529432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.529441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.529716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.530023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.530032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.530349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.530705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.530715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.531001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.531323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.531333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.531665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.531979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.531988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.532309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.532597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.532606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 
00:26:18.490 [2024-04-26 12:22:19.532944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.533286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.533295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.533604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.533922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.533932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.534249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.534565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.534574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.534905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.535182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.535191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.535476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.535779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.535788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.536100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.536425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.536434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.536757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.537067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.537076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 
00:26:18.490 [2024-04-26 12:22:19.537388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.537542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.537551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.537882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.538182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.538191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.538510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.538812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.538822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.539149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.539535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.539544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.539686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.539971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.539980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.540343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.540671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.490 [2024-04-26 12:22:19.540681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.490 qpair failed and we were unable to recover it. 00:26:18.490 [2024-04-26 12:22:19.541016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.541334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.541342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 
00:26:18.491 [2024-04-26 12:22:19.541684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.541874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.541883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.542090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.542400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.542409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.542755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.543086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.543095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.543419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.543718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.543727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.544154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.544460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.544469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.544797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.545145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.545154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.545497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.545797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.545806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 
00:26:18.491 [2024-04-26 12:22:19.546122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.546395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.546404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.546641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.546795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.546805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.547120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.547444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.547454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.547789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.548105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.548115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.548453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.548644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.548654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.548936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.549242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.549252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.549530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.549824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.549833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 
00:26:18.491 [2024-04-26 12:22:19.550201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.550527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.550536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.550763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.551031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.551041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.551333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.551619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.551628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.551937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.552231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.552242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.552536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.552830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.552842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.553136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.553459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.553468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.553775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.553965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.553976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 
00:26:18.491 [2024-04-26 12:22:19.554280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.554463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.554473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.554856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.555179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.555187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.555519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.555867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.491 [2024-04-26 12:22:19.555877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.491 qpair failed and we were unable to recover it. 00:26:18.491 [2024-04-26 12:22:19.556023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.556328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.556337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.556663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.556967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.556976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.557291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.557492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.557500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.557827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.558131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.558140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 
00:26:18.492 [2024-04-26 12:22:19.558498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.558690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.558700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.559024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.559345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.559354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.559546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.559860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.559869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.560065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.560365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.560375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.560704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.561014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.561024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.561345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.561664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.561673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.562016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.562317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.562326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 
00:26:18.492 [2024-04-26 12:22:19.562637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.562955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.562964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.563263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.563580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.563589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.563902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.564223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.564233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.564554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.564746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.564755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.565095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.565412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.565420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.565741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.566024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.566033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.566330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.566651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.566660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 
00:26:18.492 [2024-04-26 12:22:19.566999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.567284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.567293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.567619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.567904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.567913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.568237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.568554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.568564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.568894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.569192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.569201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.569536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.569851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.569860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.570180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.570494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.570503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.570834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.571159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.571168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 
00:26:18.492 [2024-04-26 12:22:19.571492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.571809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.571818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.572139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.572448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.572457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.492 [2024-04-26 12:22:19.572787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.573074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.492 [2024-04-26 12:22:19.573084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.492 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.573289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.573566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.573575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.573899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.574197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.574207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.574495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.574815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.574825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.575171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.575338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.575348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 
00:26:18.493 [2024-04-26 12:22:19.575714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.576019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.576029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.576349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.576640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.576649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.576933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.577237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.577247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.577590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.577892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.577902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.578195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.578497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.578506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.578802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.579097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.579107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.579437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.579749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.579759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 
00:26:18.493 [2024-04-26 12:22:19.580096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.580410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.580419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.580800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.580971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.580982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.581272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.581571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.581580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.581914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.582115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.582124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.582422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.582700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.582709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.583040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.583230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.583241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.583577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.583845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.583854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 
00:26:18.493 [2024-04-26 12:22:19.584170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.584494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.584504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.584817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.585139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.585149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.585469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.585741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.585750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.585982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.586311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.586321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.586656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.586877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.586887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.587178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.587489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.587499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.587830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.588168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.588179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 
00:26:18.493 [2024-04-26 12:22:19.588510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.588848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.588857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.589233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.589553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.589565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.493 qpair failed and we were unable to recover it. 00:26:18.493 [2024-04-26 12:22:19.589899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.590102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.493 [2024-04-26 12:22:19.590111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.590380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.590712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.590721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.590920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.591216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.591225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.591537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.591867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.591876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.592148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.592463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.592472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 
00:26:18.494 [2024-04-26 12:22:19.592861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.593170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.593180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.593503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.593820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.593830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.594184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.594524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.594534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.594851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.595164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.595173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.595431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.595768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.595777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.596086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.596376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.596385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.596718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.596967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.596977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 
00:26:18.494 [2024-04-26 12:22:19.597191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.597464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.597473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.597762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.598054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.598064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.598414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.598725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.598734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.599066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.599362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.599371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.599687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.599953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.599963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.600275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.600548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.600557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.600866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.601249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.601259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 
00:26:18.494 [2024-04-26 12:22:19.601504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.601796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.601805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.602169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.602502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.602512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.602850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.603141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.603149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.603497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.603861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.603870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.604163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.604441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.604450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.604735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.605002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.605011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.605302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.605635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.605644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 
00:26:18.494 [2024-04-26 12:22:19.605948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.606121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.606130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.606487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.606681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.606690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.607013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.607343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.607353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.494 qpair failed and we were unable to recover it. 00:26:18.494 [2024-04-26 12:22:19.607566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.494 [2024-04-26 12:22:19.607846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.607856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.608165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.608483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.608492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.608691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.608922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.608932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.609272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.609454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.609464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 
00:26:18.495 [2024-04-26 12:22:19.609787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.610122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.610132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.610447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.610792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.610802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.611142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.611435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.611445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.611801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.612006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.612017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.612350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.612662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.612673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.613004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.613316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.613325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.613665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.613979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.613989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 
00:26:18.495 [2024-04-26 12:22:19.614288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.614510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.614520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.614844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.615151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.615160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.615418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.615639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.615649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.615957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.616272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.616282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.616616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.616812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.616821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.617113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.617440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.617449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.617834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.618185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.618194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 
00:26:18.495 [2024-04-26 12:22:19.618508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.618703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.618713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.619009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.619338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.619347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.619642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.619939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.619948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.620271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.620593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.620604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.620941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.621234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.621243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.621581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.621901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.621911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.622232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.622547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.622556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 
00:26:18.495 [2024-04-26 12:22:19.622890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.623191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.623200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.623503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.623848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.623857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.624177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.624497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.624505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.624827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.625119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.625128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.495 qpair failed and we were unable to recover it. 00:26:18.495 [2024-04-26 12:22:19.625447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.495 [2024-04-26 12:22:19.625733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.625742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.626063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.626376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.626385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.626716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.627042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.627051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 
00:26:18.496 [2024-04-26 12:22:19.627365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.627716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.627725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.628020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.628312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.628321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.628651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.628936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.628946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.629266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.629550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.629558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.629868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.630150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.630159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.630467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.630785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.630794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.631162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.631489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.631498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 
00:26:18.496 [2024-04-26 12:22:19.631890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.632096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.632105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.632363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.632557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.632565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.632758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.633056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.633065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.633360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.633593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.633602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.633933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.634263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.634272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.634589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.634932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.634941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.635259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.635465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.635474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 
00:26:18.496 [2024-04-26 12:22:19.635780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.636085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.636095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.636409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.636730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.636740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.637061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.637299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.637308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.638520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.638831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.638850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.639175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.639472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.639481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.639781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.640076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.640085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.496 [2024-04-26 12:22:19.640388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.640600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.640610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 
00:26:18.496 [2024-04-26 12:22:19.640812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.641130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.496 [2024-04-26 12:22:19.641140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.496 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.641506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.641820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.641829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.642223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.642518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.642528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.642862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.643174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.643184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.643494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.643815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.643824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.644110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.644313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.644322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.644656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.644972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.644982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 
00:26:18.497 [2024-04-26 12:22:19.645297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.645629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.645637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.645961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.646253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.646262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.646672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.647014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.647025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.647348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.647663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.647671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.648056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.648341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.648350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.648751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.649070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.649080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.649388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.649721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.649731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 
00:26:18.497 [2024-04-26 12:22:19.650064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.650405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.650414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.650635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.650984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.650993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.651324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.651599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.651608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.651909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.652218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.652227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.652541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.652827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.652836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.653186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.653441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.653452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.653779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.654062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.654073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 
00:26:18.497 [2024-04-26 12:22:19.654358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.654675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.654684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.655050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.655320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.655330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.655642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.655966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.655975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.656286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.656595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.656605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.656895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.657227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.657236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.657533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.657874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.657883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.658087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.658376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.658385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 
00:26:18.497 [2024-04-26 12:22:19.658720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.658908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.658917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.497 [2024-04-26 12:22:19.659303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.659492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.497 [2024-04-26 12:22:19.659502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.497 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.659877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.660180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.660190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.660508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.660863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.660872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.661150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.661473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.661482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.661800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.662069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.662079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.662408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.662733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.662741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 
00:26:18.498 [2024-04-26 12:22:19.662955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.663191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.663201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.663429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.663703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.663712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.663911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.664190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.664199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.664484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.664778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.664787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.665072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.665344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.665353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.665696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.665994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.666003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.666408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.666685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.666694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 
00:26:18.498 [2024-04-26 12:22:19.667015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.667308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.667317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.667520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.667763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.667771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.668007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.668374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.668383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.668565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.668764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.668774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.669101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.669400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.669409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.669819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.670045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.670055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.670297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.670516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.670524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 
00:26:18.498 [2024-04-26 12:22:19.670842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.671195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.671203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.671528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.671726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.671736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.671939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.672236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.672245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.672576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.672904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.672913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.673218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.673290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.673299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.673594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.673873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.673883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.674140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.674375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.674384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 
00:26:18.498 [2024-04-26 12:22:19.674668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.674944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.674954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.675302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.675623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.675632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.498 [2024-04-26 12:22:19.675930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.676253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.498 [2024-04-26 12:22:19.676262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.498 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.676582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.676901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.676910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.677250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.677589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.677598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.677903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.678120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.678130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.678453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.678772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.678781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 
00:26:18.499 [2024-04-26 12:22:19.679127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.679445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.679454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.679764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.680095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.680104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.680463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.680785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.680794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.681108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.681424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.681433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.681756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.681980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.681989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.682290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.682621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.682631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.682937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.683146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.683155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 
00:26:18.499 [2024-04-26 12:22:19.683333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.683635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.683646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.683833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.684167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.684176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.684496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.684815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.684824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.685130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.685465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.685473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.685785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.686034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.686044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.686348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.686656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.686665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.687031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.687367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.687377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 
00:26:18.499 [2024-04-26 12:22:19.687742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.688041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.688051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.499 [2024-04-26 12:22:19.688391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.688711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.499 [2024-04-26 12:22:19.688720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.499 qpair failed and we were unable to recover it. 00:26:18.777 [2024-04-26 12:22:19.689112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.777 [2024-04-26 12:22:19.689302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.777 [2024-04-26 12:22:19.689313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.777 qpair failed and we were unable to recover it. 00:26:18.777 [2024-04-26 12:22:19.689622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.777 [2024-04-26 12:22:19.689926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.777 [2024-04-26 12:22:19.689936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.690165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.690446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.690455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.690672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.690955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.690965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.691271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.691637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.691646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 
00:26:18.778 [2024-04-26 12:22:19.691940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.692232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.692241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.692542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.692855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.692864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.693191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.693480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.693489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.693682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.693965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.693975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.694052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.694416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.694425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.694782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.695131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.695141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.695340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.695584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.695592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 
00:26:18.778 [2024-04-26 12:22:19.695890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.696200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.696209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.696612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.696961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.696972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.697191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.697458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.697467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.697759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.697975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.697984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.698310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.698625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.698634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.698937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.699253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.699262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.699558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.699921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.699930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 
00:26:18.778 [2024-04-26 12:22:19.700251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.700531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.700540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.700829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.701185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.701195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.701511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.701714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.701723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.702034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.702236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.702244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.702524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.702731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.702740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.702941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.703260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.703269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.703647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.703824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.703833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 
00:26:18.778 [2024-04-26 12:22:19.704147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.704419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.704429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.704776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.705027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.705036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.705345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.705642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.705652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.705972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.706305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.778 [2024-04-26 12:22:19.706314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.778 qpair failed and we were unable to recover it. 00:26:18.778 [2024-04-26 12:22:19.706452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.706645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.706654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.779 qpair failed and we were unable to recover it. 00:26:18.779 [2024-04-26 12:22:19.706948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.707174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.707182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.779 qpair failed and we were unable to recover it. 00:26:18.779 [2024-04-26 12:22:19.707538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.707822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.707832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.779 qpair failed and we were unable to recover it. 
00:26:18.779 [2024-04-26 12:22:19.708133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.708438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.708446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.779 qpair failed and we were unable to recover it. 00:26:18.779 [2024-04-26 12:22:19.708742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.709032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.709041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.779 qpair failed and we were unable to recover it. 00:26:18.779 [2024-04-26 12:22:19.709366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.709544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.709552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.779 qpair failed and we were unable to recover it. 00:26:18.779 [2024-04-26 12:22:19.709862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.710161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.710169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.779 qpair failed and we were unable to recover it. 00:26:18.779 [2024-04-26 12:22:19.710470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.710792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.710801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.779 qpair failed and we were unable to recover it. 00:26:18.779 [2024-04-26 12:22:19.711000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.711184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.711192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.779 qpair failed and we were unable to recover it. 00:26:18.779 [2024-04-26 12:22:19.711476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.711810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.779 [2024-04-26 12:22:19.711819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.779 qpair failed and we were unable to recover it. 
00:26:18.779 [2024-04-26 12:22:19.712121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.779 [2024-04-26 12:22:19.712353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.779 [2024-04-26 12:22:19.712362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:18.779 qpair failed and we were unable to recover it.
[The same sequence (two posix_sock_create connect() failures with errno = 111, followed by an nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x178c650 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 12:22:19.712 and 12:22:19.805.]
00:26:18.784 [2024-04-26 12:22:19.805563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.784 [2024-04-26 12:22:19.805881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.784 [2024-04-26 12:22:19.805891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:18.784 qpair failed and we were unable to recover it.
00:26:18.784 [2024-04-26 12:22:19.806135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.784 [2024-04-26 12:22:19.806468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.784 [2024-04-26 12:22:19.806476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.784 qpair failed and we were unable to recover it. 00:26:18.784 [2024-04-26 12:22:19.806773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.784 [2024-04-26 12:22:19.806975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.784 [2024-04-26 12:22:19.806984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.784 qpair failed and we were unable to recover it. 00:26:18.784 [2024-04-26 12:22:19.807293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.807625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.807634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.807819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.808127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.808137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.808473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.808673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.808683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.808966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.809187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.809196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.809420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.809770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.809779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 
00:26:18.785 [2024-04-26 12:22:19.810100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.810440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.810448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.810823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.811138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.811148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.811471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.811786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.811794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.812127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.812438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.812446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.812829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.813152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.813164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.813449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.813751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.813761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.814087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.814408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.814416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 
00:26:18.785 [2024-04-26 12:22:19.814469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.814786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.814795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.815129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.815431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.815441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.815785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.816092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.816101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.816413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.816701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.816710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.816918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.817196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.817205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.817525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.817814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.817823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.818131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.818423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.818432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 
00:26:18.785 [2024-04-26 12:22:19.818736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.819041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.819055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.819364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.819667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.819676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.819973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.820153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.820163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.820471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.820749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.820757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.821074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.821269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.821278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.821566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.821851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.821860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.822180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.822506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.822515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 
00:26:18.785 [2024-04-26 12:22:19.822853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.823150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.823158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.823475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.823821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.823831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.785 [2024-04-26 12:22:19.824142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.824458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.785 [2024-04-26 12:22:19.824467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.785 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.824799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.825109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.825119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.825458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.825763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.825772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.826030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.826334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.826343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.826673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.826861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.826872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 
00:26:18.786 [2024-04-26 12:22:19.827179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.827495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.827504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.827781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.828103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.828112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.828494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.828792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.828802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.829153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.829491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.829500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.829801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.830098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.830108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.830440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.830752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.830761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.831084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.831399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.831408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 
00:26:18.786 [2024-04-26 12:22:19.831701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.832036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.832046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.832355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.832595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.832605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.832945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.833265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.833274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.833599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.833910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.833919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.834314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.834626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.834634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.834950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.835266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.835275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.835608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.835920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.835930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 
00:26:18.786 [2024-04-26 12:22:19.836247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.836528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.836537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.836851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.837131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.837140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.837388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.837739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.837748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.838090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.838384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.838393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.838710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.839034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.839044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.839353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.839667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.839676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.839982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.840284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.840293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 
00:26:18.786 [2024-04-26 12:22:19.840599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.840976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.840985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.841273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.841603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.841612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.841999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.842314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.842323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-04-26 12:22:19.842646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.843024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.786 [2024-04-26 12:22:19.843033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.843335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.843647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.843656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.843944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.844262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.844271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.844590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.844881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.844890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 
00:26:18.787 [2024-04-26 12:22:19.845202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.845525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.845533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.845844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.846156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.846164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.846485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.846757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.846767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.847083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.847386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.847395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.847731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.848053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.848062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.848402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.848721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.848731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.849078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.849394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.849403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 
00:26:18.787 [2024-04-26 12:22:19.849735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.850022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.850031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.850337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.850671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.850680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.850986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.851329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.851339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.851667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.851995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.852004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.852357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.852678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.852687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.853012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.853310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.853320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.853502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.853813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.853823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 
00:26:18.787 [2024-04-26 12:22:19.854132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.854423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.854432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.854758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.854981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.854991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.855236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.855416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.855425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.855723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.855983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.855992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.856319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.856635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.856644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.856964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.857263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.857272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-04-26 12:22:19.857453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.857735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.857744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.787 qpair failed and we were unable to recover it. 
00:26:18.787 [2024-04-26 12:22:19.857932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.787 [2024-04-26 12:22:19.858146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.858155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.858463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.858758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.858768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.859079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.859379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.859388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.859720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.860000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.860010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.860346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.860656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.860665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.860989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.861291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.861300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.861620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.861927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.861936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 
00:26:18.788 [2024-04-26 12:22:19.862253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.862534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.862543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.862832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.863147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.863156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.863470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.863793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.863802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.864187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.864525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.864534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.864827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.865066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.865076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.865369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.865650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.865659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.865990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.866327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.866336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 
00:26:18.788 [2024-04-26 12:22:19.866652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.866967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.866976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.867293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.867608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.867617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.868025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.868304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.868313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.868629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.868914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.868924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.869215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.869400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.869410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.869626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.869950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.869959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.870254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.870595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.870604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 
00:26:18.788 [2024-04-26 12:22:19.870912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.871247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.871255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.871585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.871925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.871935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.872257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.872578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.872587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.872923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.873202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.873211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.873528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.873844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.873853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.874265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.874556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.874564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.874891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.875211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.875220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 
00:26:18.788 [2024-04-26 12:22:19.875508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.875833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.788 [2024-04-26 12:22:19.875847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-04-26 12:22:19.876166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.876466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.876475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.876789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.877083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.877092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.877427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.877728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.877737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.878061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.878368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.878378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.878552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.878879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.878888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.879189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.879518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.879527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 
00:26:18.789 [2024-04-26 12:22:19.879736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.879972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.879981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.880318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.880606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.880615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.880908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.881302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.881311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.881621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.881948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.881957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.882156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.882429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.882440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.882743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.882909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.882919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.883216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.883520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.883529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 
00:26:18.789 [2024-04-26 12:22:19.883845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.884118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.884127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.884451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.884740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.884749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.885048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.885378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.885387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.885715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.885903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.885913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.886241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.886557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.886566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.886896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.887069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.887080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.887383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.887654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.887663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 
00:26:18.789 [2024-04-26 12:22:19.888009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.888290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.888300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.888621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.888908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.888919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.888996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.889283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.889292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.889676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.889907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.889916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.890210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.890541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.890550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.890872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.891211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.891221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.891373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.891641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.891650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 
00:26:18.789 [2024-04-26 12:22:19.891922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.892253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.892262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.892593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.892982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.789 [2024-04-26 12:22:19.892992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.789 qpair failed and we were unable to recover it. 00:26:18.789 [2024-04-26 12:22:19.893273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.893610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.893620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.893802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.894155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.894165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.894486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.894804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.894814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.895146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.895457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.895467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.895799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.896089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.896099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 
00:26:18.790 [2024-04-26 12:22:19.896317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.896582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.896592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.896894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.897233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.897242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.897449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.897746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.897756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.898086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.898371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.898381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.898703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.898897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.898906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.899216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.899499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.899507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.899910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.900216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.900225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 
00:26:18.790 [2024-04-26 12:22:19.900552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.900925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.900935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.901234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.901545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.901553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.901936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.902243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.902252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.902598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.902961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.902972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.903275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.903616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.903625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.903944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.904259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.904268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.904591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.904908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.904917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 
00:26:18.790 [2024-04-26 12:22:19.905246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.905555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.905564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.905881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.906200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.906209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.906529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.906849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.906858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.907027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.907348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.907357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.907698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.908053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.908064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.908337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.908655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.908664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.909038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.909363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.909372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 
00:26:18.790 [2024-04-26 12:22:19.909682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.910010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.910020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.910214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.910418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.910426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.790 qpair failed and we were unable to recover it. 00:26:18.790 [2024-04-26 12:22:19.910640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.910846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.790 [2024-04-26 12:22:19.910856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.911205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.911420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.911429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.911643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.912007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.912017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.912366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.912715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.912724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.913054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.913361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.913374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 
00:26:18.791 [2024-04-26 12:22:19.913567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.913858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.913867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.914188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.914517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.914526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.914826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.915138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.915147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.915473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.915644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.915654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.915876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.916221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.916230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.916541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.916876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.916886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.917202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.917541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.917550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 
00:26:18.791 [2024-04-26 12:22:19.917757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.918038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.918047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.918339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.918614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.918623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.918943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.919248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.919257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.919570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.919913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.919922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.920120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.920388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.920397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.920758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.921047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.921057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.921387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.921580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.921590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 
00:26:18.791 [2024-04-26 12:22:19.921847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.922151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.922160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.922483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.922793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.922802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.923086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.923291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.923299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.923497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.923802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.923810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.924104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.924339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.924348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.791 qpair failed and we were unable to recover it. 00:26:18.791 [2024-04-26 12:22:19.924633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.791 [2024-04-26 12:22:19.924801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.924811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.925087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.925286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.925296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 
00:26:18.792 [2024-04-26 12:22:19.925606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.925825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.925834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.926041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.926382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.926391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.926693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.926874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.926884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.927280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.927506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.927515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.927647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.927966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.927975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.928288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.928567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.928576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.928946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.929295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.929305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 
00:26:18.792 [2024-04-26 12:22:19.929642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.929955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.929964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.930101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.930410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.930419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.930633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.930864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.930874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.931170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.931491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.931500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.931822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.932145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.932154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.932444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.932541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.932551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.932875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.933094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.933103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 
00:26:18.792 [2024-04-26 12:22:19.933401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.933696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.933706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.933900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.934095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.934104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.934397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.934694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.934704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.935018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.935330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.935339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.935651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.935993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.936003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.936185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.936543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.936552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.936849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.937212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.937220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 
00:26:18.792 [2024-04-26 12:22:19.937532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.937738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.937747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.938074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.938379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.938388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.938743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.939057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.939066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.939373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.939543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.939551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.939898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.940081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.940091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.940340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.940666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.940674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.792 qpair failed and we were unable to recover it. 00:26:18.792 [2024-04-26 12:22:19.940995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.792 [2024-04-26 12:22:19.941306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.941315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 
00:26:18.793 [2024-04-26 12:22:19.941626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.941928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.941938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.942223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.942544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.942555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.942776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.943090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.943099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.943412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.943742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.943751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.944135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.944453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.944463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.944823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.945131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.945140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.945337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.945636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.945645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 
00:26:18.793 [2024-04-26 12:22:19.945972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.946276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.946285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.946618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.946951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.946960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.947258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.947410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.947419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.947710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.947905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.947915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.948106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.948440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.948450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.948774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.949076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.949085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.949397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.949716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.949725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 
00:26:18.793 [2024-04-26 12:22:19.950055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.950331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.950339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.950658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.950968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.950977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.951338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.951512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.951521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.951737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.952100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.952110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.952439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.952761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.952769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.952969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.953039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.953048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.953232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.953396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.953405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 
00:26:18.793 [2024-04-26 12:22:19.953620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.953852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.953863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.954178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.954518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.954528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.954780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.955077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.955086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.955372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.955710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.955719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.956061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.956382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.956391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.956725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.957009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.957018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 00:26:18.793 [2024-04-26 12:22:19.957336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.957654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.793 [2024-04-26 12:22:19.957662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.793 qpair failed and we were unable to recover it. 
00:26:18.793 [2024-04-26 12:22:19.957987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.958326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.958336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.958642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.958946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.958956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.959279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.959370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.959378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.959704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.959873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.959883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.960182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.960498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.960508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.960814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.961218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.961228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.961546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.961861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.961870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 
00:26:18.794 [2024-04-26 12:22:19.962192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.962516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.962524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.962862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.963183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.963192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.963488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.963829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.963842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.964161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.964494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.964503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.964823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.965127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.965136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.965521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.965845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.965855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.966212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.966525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.966534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 
00:26:18.794 [2024-04-26 12:22:19.966871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.967085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.967094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.967300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.967606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.967616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.967923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.968231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.968241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.968575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.968760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.968769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.969123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.969446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.969455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.969640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.969964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.969974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.970288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.970605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.970615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 
00:26:18.794 [2024-04-26 12:22:19.970954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.971163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.971172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.971490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.971804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.971813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.972124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.972303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.972313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.972623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.972929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.972939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.973356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.973624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.973633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.973968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.974152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.974162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.974543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.974854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.974864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 
00:26:18.794 [2024-04-26 12:22:19.975165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.975478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.794 [2024-04-26 12:22:19.975487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.794 qpair failed and we were unable to recover it. 00:26:18.794 [2024-04-26 12:22:19.975802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.976125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.976135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.795 qpair failed and we were unable to recover it. 00:26:18.795 [2024-04-26 12:22:19.976315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.976635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.976645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.795 qpair failed and we were unable to recover it. 00:26:18.795 [2024-04-26 12:22:19.976938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.977255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.977264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.795 qpair failed and we were unable to recover it. 00:26:18.795 [2024-04-26 12:22:19.977569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.977895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.977905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.795 qpair failed and we were unable to recover it. 00:26:18.795 [2024-04-26 12:22:19.978197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.978405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.978415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.795 qpair failed and we were unable to recover it. 00:26:18.795 [2024-04-26 12:22:19.978760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.979078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.979091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.795 qpair failed and we were unable to recover it. 
00:26:18.795 [2024-04-26 12:22:19.979416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.979733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.979742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.795 qpair failed and we were unable to recover it. 00:26:18.795 [2024-04-26 12:22:19.980102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.980409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.980418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.795 qpair failed and we were unable to recover it. 00:26:18.795 [2024-04-26 12:22:19.980737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.981057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.981066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.795 qpair failed and we were unable to recover it. 00:26:18.795 [2024-04-26 12:22:19.981373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.981686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.981695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.795 qpair failed and we were unable to recover it. 00:26:18.795 [2024-04-26 12:22:19.981995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.982317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.982325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.795 qpair failed and we were unable to recover it. 00:26:18.795 [2024-04-26 12:22:19.982664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.982855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.795 [2024-04-26 12:22:19.982865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:18.795 qpair failed and we were unable to recover it. 00:26:18.795 [2024-04-26 12:22:19.983198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.983483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.983494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.099 qpair failed and we were unable to recover it. 
00:26:19.099 [2024-04-26 12:22:19.983804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.984118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.984128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.099 qpair failed and we were unable to recover it. 00:26:19.099 [2024-04-26 12:22:19.984455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.984773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.984783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.099 qpair failed and we were unable to recover it. 00:26:19.099 [2024-04-26 12:22:19.985122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.985423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.985433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.099 qpair failed and we were unable to recover it. 00:26:19.099 [2024-04-26 12:22:19.985749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.986056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.986065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.099 qpair failed and we were unable to recover it. 00:26:19.099 [2024-04-26 12:22:19.986346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.986536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.986545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.099 qpair failed and we were unable to recover it. 00:26:19.099 [2024-04-26 12:22:19.986848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.987134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.987143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.099 qpair failed and we were unable to recover it. 00:26:19.099 [2024-04-26 12:22:19.987473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.987797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.987807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.099 qpair failed and we were unable to recover it. 
00:26:19.099 [2024-04-26 12:22:19.988116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.988428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.988437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.099 qpair failed and we were unable to recover it. 00:26:19.099 [2024-04-26 12:22:19.988796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.989097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.989106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.099 qpair failed and we were unable to recover it. 00:26:19.099 [2024-04-26 12:22:19.989454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.989765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.989774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.099 qpair failed and we were unable to recover it. 00:26:19.099 [2024-04-26 12:22:19.990099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.990425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.990433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.099 qpair failed and we were unable to recover it. 00:26:19.099 [2024-04-26 12:22:19.990747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.099 [2024-04-26 12:22:19.991014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.991024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:19.991340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.991632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.991641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:19.991960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.992272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.992282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 
00:26:19.100 [2024-04-26 12:22:19.992569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.992884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.992894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:19.993176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.993496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.993505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:19.993820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.994151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.994160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:19.994368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.994684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.994693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:19.994982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.995192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.995201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:19.995520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.995841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.995850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:19.996180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.996515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.996524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 
00:26:19.100 [2024-04-26 12:22:19.996860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.997191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.997199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:19.997533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.997836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.997849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:19.998255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.998553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.998563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:19.998875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.999215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.999224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:19.999547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.999875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:19.999884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:20.000194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.000494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.000503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:20.000831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.001033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.001043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 
00:26:19.100 [2024-04-26 12:22:20.001338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.002279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.002293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:20.002556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.002866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.002876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:20.003260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.003533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.003542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:20.003858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.004135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.004145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:20.004460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.004648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.004658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:20.005008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.005351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.005360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:20.005661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.005930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.005940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 
00:26:19.100 [2024-04-26 12:22:20.006249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.006449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.006459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.100 qpair failed and we were unable to recover it. 00:26:19.100 [2024-04-26 12:22:20.006764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.100 [2024-04-26 12:22:20.007006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.007016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.007313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.007636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.007645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.007823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.008140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.008149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.008469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.008763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.008772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.009086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.009403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.009412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.009716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.010005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.010015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 
00:26:19.101 [2024-04-26 12:22:20.010316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.010595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.010604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.010935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.011311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.011323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.011650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.011950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.011960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.012292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.012587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.012596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.012927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.013207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.013216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.013567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.013888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.013897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.014202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.014516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.014525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 
00:26:19.101 [2024-04-26 12:22:20.014834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.015167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.015176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.015494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.015778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.015787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.016126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.016440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.016449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.016762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.017026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.017035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.017312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.017632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.017641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.017959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.018289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.018298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.018595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.018912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.018921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 
00:26:19.101 [2024-04-26 12:22:20.019239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.019460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.019470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.019789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.020082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.020092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.020385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.020694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.020704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.021053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.021351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.021360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.021675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.021992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.022001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.022320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.022634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.022644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.023022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.023335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.023345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 
00:26:19.101 [2024-04-26 12:22:20.023688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.024006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.024017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.024385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.024679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.101 [2024-04-26 12:22:20.024688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.101 qpair failed and we were unable to recover it. 00:26:19.101 [2024-04-26 12:22:20.025011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.025317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.025326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.025655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.025962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.025972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.026292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.026585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.026595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.026917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.027213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.027223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.027532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.027847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.027858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 
00:26:19.102 [2024-04-26 12:22:20.028185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.028498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.028508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.028842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.029153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.029163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.029474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.029786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.029795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.030149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.030465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.030475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.030792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.031025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.031035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.031349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.031646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.031656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.031985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.032180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.032190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 
00:26:19.102 [2024-04-26 12:22:20.032503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.032820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.032829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.033136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.033463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.033472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.033782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.034076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.034085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.034401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.034723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.034733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.035042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.035325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.035334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.035712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.036008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.036017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.036329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.036620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.036629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 
00:26:19.102 [2024-04-26 12:22:20.036945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.037244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.037254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.037579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.037904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.037914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.038208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.038537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.038546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.038861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.039068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.039078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.039411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.039737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.039746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.040069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.040387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.040396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.040734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.041051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.041061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 
00:26:19.102 [2024-04-26 12:22:20.041378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.041656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.041665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.042017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.042281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.102 [2024-04-26 12:22:20.042290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.102 qpair failed and we were unable to recover it. 00:26:19.102 [2024-04-26 12:22:20.042612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.042800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.042811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.043100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.043433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.043445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.043761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.044067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.044078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.044389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.044701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.044711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.045050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.045358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.045369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 
00:26:19.103 [2024-04-26 12:22:20.045712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.045999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.046009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.046331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.046626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.046635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.046963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.047277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.047286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.047624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.047938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.047948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.048235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.048421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.048430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.048634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.048943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.048953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.049226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.049555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.049566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 
00:26:19.103 [2024-04-26 12:22:20.049861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.050151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.050160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.050496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.050785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.050793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.051087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.051383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.051393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.051797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.052071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.052080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.052393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.052708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.052717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.053053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.053374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.053383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.053673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.053866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.053876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 
00:26:19.103 [2024-04-26 12:22:20.054187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.054487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.054496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.054802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.055034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.055044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.055350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.055641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.055651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.055980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.056317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.056326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.056641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.056959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.056970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.057290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.057636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.057645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.103 [2024-04-26 12:22:20.058023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.058340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.058350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 
00:26:19.103 [2024-04-26 12:22:20.058552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.058602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.103 [2024-04-26 12:22:20.058612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.103 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.058927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.059219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.059228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.059548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.059827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.059842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.060155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.060351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.060360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.060682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.060983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.060993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.061304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.061523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.061532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.061869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.062176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.062185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 
00:26:19.104 [2024-04-26 12:22:20.062492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.062807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.062816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.063139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.063335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.063345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.063666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.063956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.063966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.064308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.064606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.064615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.064950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.065264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.065273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.065552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.065820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.065829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.066160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.066472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.066482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 
00:26:19.104 [2024-04-26 12:22:20.066685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.067031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.067040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.067343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.067640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.067649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.068016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.068343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.068352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.068662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.068981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.068991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.069308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.069601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.069610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.069992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.070273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.070282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 00:26:19.104 [2024-04-26 12:22:20.070596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.070913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.070923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.104 qpair failed and we were unable to recover it. 
00:26:19.104 [2024-04-26 12:22:20.071245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.104 [2024-04-26 12:22:20.071557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.071566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.071899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.072218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.072227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.072527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.072848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.072857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.073183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.073468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.073477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.073794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.074081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.074091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.074427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.074734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.074744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.075067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.075282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.075292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 
00:26:19.105 [2024-04-26 12:22:20.075604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.075915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.075925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.076253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.076529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.076538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.076831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.077034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.077044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.077339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.077641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.077650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.077954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.078260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.078270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.078554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.078871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.078881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.079202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.079527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.079537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 
00:26:19.105 [2024-04-26 12:22:20.079751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.080032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.080042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.080238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.080433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.080445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.080761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.081062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.081072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.081392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.081712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.081721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.082088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.082400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.082410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.082728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.083000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.083010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.083320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.083637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.083647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 
00:26:19.105 [2024-04-26 12:22:20.083968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.084291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.084300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.084676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.084957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.084966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.085275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.085564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.085574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.085902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.086173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.086182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.105 [2024-04-26 12:22:20.086502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.086676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.105 [2024-04-26 12:22:20.086686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.105 qpair failed and we were unable to recover it. 00:26:19.106 [2024-04-26 12:22:20.087044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.087236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.087245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.106 qpair failed and we were unable to recover it. 00:26:19.106 [2024-04-26 12:22:20.087574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.087883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.087892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.106 qpair failed and we were unable to recover it. 
00:26:19.106 [2024-04-26 12:22:20.088284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.088553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.088562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.106 qpair failed and we were unable to recover it. 00:26:19.106 [2024-04-26 12:22:20.088882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.089207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.089216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.106 qpair failed and we were unable to recover it. 00:26:19.106 [2024-04-26 12:22:20.089512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.089842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.089853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.106 qpair failed and we were unable to recover it. 00:26:19.106 [2024-04-26 12:22:20.090067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.090272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.090282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.106 qpair failed and we were unable to recover it. 00:26:19.106 [2024-04-26 12:22:20.090647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.090927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.090937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.106 qpair failed and we were unable to recover it. 00:26:19.106 [2024-04-26 12:22:20.091265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.091566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.091575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.106 qpair failed and we were unable to recover it. 00:26:19.106 [2024-04-26 12:22:20.091887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.092205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.092214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.106 qpair failed and we were unable to recover it. 
00:26:19.106 [2024-04-26 12:22:20.092519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.092801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.092810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.106 qpair failed and we were unable to recover it. 00:26:19.106 [2024-04-26 12:22:20.093117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.093446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.093455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.106 qpair failed and we were unable to recover it. 00:26:19.106 [2024-04-26 12:22:20.093745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.093991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.094000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.106 qpair failed and we were unable to recover it. 00:26:19.106 [2024-04-26 12:22:20.094349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.094646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.094655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.106 qpair failed and we were unable to recover it. 00:26:19.106 [2024-04-26 12:22:20.094965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.095269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.095278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.106 qpair failed and we were unable to recover it. 00:26:19.106 [2024-04-26 12:22:20.095600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.106 [2024-04-26 12:22:20.095933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.107 [2024-04-26 12:22:20.095942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.107 qpair failed and we were unable to recover it. 00:26:19.107 [2024-04-26 12:22:20.096243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.107 [2024-04-26 12:22:20.096535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.107 [2024-04-26 12:22:20.096544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.107 qpair failed and we were unable to recover it. 
00:26:19.107 [2024-04-26 12:22:20.096900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.107 [2024-04-26 12:22:20.097232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.107 [2024-04-26 12:22:20.097241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:19.107 qpair failed and we were unable to recover it.
00:26:19.107-00:26:19.115 [... the same three-part failure pattern (posix_sock_create connect() errors with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x178c650 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously for every reconnection attempt between 2024-04-26 12:22:20.096900 and 12:22:20.190224 ...]
00:26:19.115 [2024-04-26 12:22:20.190507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.115 [2024-04-26 12:22:20.190845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.115 [2024-04-26 12:22:20.190854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.115 qpair failed and we were unable to recover it. 00:26:19.115 [2024-04-26 12:22:20.191220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.115 [2024-04-26 12:22:20.191533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.115 [2024-04-26 12:22:20.191542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.115 qpair failed and we were unable to recover it. 00:26:19.115 [2024-04-26 12:22:20.191881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.115 [2024-04-26 12:22:20.192087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.115 [2024-04-26 12:22:20.192096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.115 qpair failed and we were unable to recover it. 00:26:19.115 [2024-04-26 12:22:20.192376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.115 [2024-04-26 12:22:20.192659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.115 [2024-04-26 12:22:20.192668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.115 qpair failed and we were unable to recover it. 00:26:19.115 [2024-04-26 12:22:20.192980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.193309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.193318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.193613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.193786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.193796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.194290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.194583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.194592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 
00:26:19.116 [2024-04-26 12:22:20.194904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.195102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.195111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.195430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.195763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.195772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.196100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.196409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.196418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.196718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.196927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.196937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.197253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.197571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.197579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.197794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.198158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.198167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.198338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.198614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.198623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 
00:26:19.116 [2024-04-26 12:22:20.198926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.199252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.199261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.199651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.199871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.199880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.200178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.200522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.200531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.200812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.201117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.201126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.201506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.201817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.201826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.202188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.202380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.202389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.202699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.203007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.203016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 
00:26:19.116 [2024-04-26 12:22:20.203338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.203595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.203604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.203911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.204126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.204136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.204432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.204753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.204763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.205089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.205377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.205386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.205674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.205844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.205854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.206127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.206433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.206441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 00:26:19.116 [2024-04-26 12:22:20.206613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.206941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.206950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.116 qpair failed and we were unable to recover it. 
00:26:19.116 [2024-04-26 12:22:20.207274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.207465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.116 [2024-04-26 12:22:20.207473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.207668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.208107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.208116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.208411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.208736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.208745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.209057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.209377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.209386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.209561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.209904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.209913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.210213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.210503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.210514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.210851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.211144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.211153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 
00:26:19.117 [2024-04-26 12:22:20.211469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.211799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.211808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.212193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.212504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.212522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.212735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.213019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.213029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.213387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.213683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.213692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.214002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.214304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.214313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.214684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.215002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.215012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.215346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.215657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.215666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 
00:26:19.117 [2024-04-26 12:22:20.216003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.216284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.216293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.216603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.216903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.216912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.217231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.217538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.217547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.217860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.218215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.218224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.218599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.218943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.218953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.219264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.219603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.219612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.117 qpair failed and we were unable to recover it. 00:26:19.117 [2024-04-26 12:22:20.219901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.220204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.117 [2024-04-26 12:22:20.220213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.118 qpair failed and we were unable to recover it. 
00:26:19.118 [2024-04-26 12:22:20.220534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.220700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.220710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.118 qpair failed and we were unable to recover it. 00:26:19.118 [2024-04-26 12:22:20.221034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.221313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.221322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.118 qpair failed and we were unable to recover it. 00:26:19.118 [2024-04-26 12:22:20.221645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.221843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.221854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.118 qpair failed and we were unable to recover it. 00:26:19.118 [2024-04-26 12:22:20.222087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.222386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.222396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.118 qpair failed and we were unable to recover it. 00:26:19.118 [2024-04-26 12:22:20.222716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.223021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.223030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.118 qpair failed and we were unable to recover it. 00:26:19.118 [2024-04-26 12:22:20.223336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.223653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.223662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.118 qpair failed and we were unable to recover it. 00:26:19.118 [2024-04-26 12:22:20.223966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.224267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.224276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.118 qpair failed and we were unable to recover it. 
00:26:19.118 [2024-04-26 12:22:20.224594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.224861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.224870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.118 qpair failed and we were unable to recover it. 00:26:19.118 [2024-04-26 12:22:20.225085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.225383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.225392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.118 qpair failed and we were unable to recover it. 00:26:19.118 [2024-04-26 12:22:20.225724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.226047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.226056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.118 qpair failed and we were unable to recover it. 00:26:19.118 [2024-04-26 12:22:20.226240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.118 [2024-04-26 12:22:20.226577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.226586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.119 qpair failed and we were unable to recover it. 00:26:19.119 [2024-04-26 12:22:20.226876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.227209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.227218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.119 qpair failed and we were unable to recover it. 00:26:19.119 [2024-04-26 12:22:20.227510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.227853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.227862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.119 qpair failed and we were unable to recover it. 00:26:19.119 [2024-04-26 12:22:20.228222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.228551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.228561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.119 qpair failed and we were unable to recover it. 
00:26:19.119 [2024-04-26 12:22:20.228922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.229222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.229231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.119 qpair failed and we were unable to recover it. 00:26:19.119 [2024-04-26 12:22:20.229515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.229693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.229702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.119 qpair failed and we were unable to recover it. 00:26:19.119 [2024-04-26 12:22:20.230004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.230311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.230320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.119 qpair failed and we were unable to recover it. 00:26:19.119 [2024-04-26 12:22:20.230655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.230958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.230967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.119 qpair failed and we were unable to recover it. 00:26:19.119 [2024-04-26 12:22:20.231284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.231598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.231607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.119 qpair failed and we were unable to recover it. 00:26:19.119 [2024-04-26 12:22:20.232000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.232315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.232325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.119 qpair failed and we were unable to recover it. 00:26:19.119 [2024-04-26 12:22:20.232658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.232970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.232980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.119 qpair failed and we were unable to recover it. 
00:26:19.119 [2024-04-26 12:22:20.233319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.233631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.233640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.119 qpair failed and we were unable to recover it. 00:26:19.119 [2024-04-26 12:22:20.233690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.234050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.234059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.119 qpair failed and we were unable to recover it. 00:26:19.119 [2024-04-26 12:22:20.234388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.234682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.119 [2024-04-26 12:22:20.234690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.119 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.235003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.235300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.235309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.235642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.236024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.236034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.236324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.236443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.236452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.236742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.237050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.237060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 
00:26:19.120 [2024-04-26 12:22:20.237371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.237677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.237686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.237974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.238295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.238304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.238591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.238909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.238918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.239265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.239576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.239585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.239905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.240197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.240206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.240497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.240725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.240733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.241016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.241188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.241198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 
00:26:19.120 [2024-04-26 12:22:20.241484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.241809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.241821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.242117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.242324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.242333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.242646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.242965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.242975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.243271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.243550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.243559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.243875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.244175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.244184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.244495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.244807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.244816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.245122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.245433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.245441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 
00:26:19.120 [2024-04-26 12:22:20.245758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.246037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.246046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.120 qpair failed and we were unable to recover it. 00:26:19.120 [2024-04-26 12:22:20.246435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.120 [2024-04-26 12:22:20.246738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.246748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.247075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.247260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.247269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.247614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.247911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.247921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.248213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.248529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.248538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.248854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.249138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.249147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.249470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.249748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.249756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 
00:26:19.121 [2024-04-26 12:22:20.250134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.250445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.250454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.250760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.251073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.251083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.251385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.251777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.251785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.252154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.252471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.252480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.252815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.253199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.253208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.253551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.253864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.253874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.254154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.254458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.254467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 
00:26:19.121 [2024-04-26 12:22:20.254806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.255108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.255117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.255453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.255762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.255771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.256157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.256498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.256506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.256911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.257217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.257226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.257542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.257807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.257816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.258137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.258408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.258417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.121 qpair failed and we were unable to recover it. 00:26:19.121 [2024-04-26 12:22:20.258737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.121 [2024-04-26 12:22:20.259086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.259095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 
00:26:19.122 [2024-04-26 12:22:20.259410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.259686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.259695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.260015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.260311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.260320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.260696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.260888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.260898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.261088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.261349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.261358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.261702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.261967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.261977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.262306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.262634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.262643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.263007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.263298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.263307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 
00:26:19.122 [2024-04-26 12:22:20.263626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.263936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.263946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.264260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.264531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.264540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.264859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.265169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.265178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.265515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.265816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.265825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.266166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.266357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.266367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.266609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.266891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.266900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.267182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.267483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.267492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 
00:26:19.122 [2024-04-26 12:22:20.267775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.268097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.268106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.268418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.268730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.268740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.268921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.269191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.269201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.122 [2024-04-26 12:22:20.269507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.269700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.122 [2024-04-26 12:22:20.269710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.122 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.270015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.270342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.270350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.270670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.270980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.270989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.271229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.271583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.271592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 
00:26:19.123 [2024-04-26 12:22:20.271909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.272203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.272212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.272508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.272700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.272710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.272975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.273282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.273294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.273612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.273895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.273904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.274213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.274515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.274524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.274848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.275175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.275184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.275488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.275697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.275707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 
00:26:19.123 [2024-04-26 12:22:20.276016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.276343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.276352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.276576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.276846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.276857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.277175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.277454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.277463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.277769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.278068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.278077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.278376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.278687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.278696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.278969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.279287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.279299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.279632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.279934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.279944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 
00:26:19.123 [2024-04-26 12:22:20.280260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.280574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.280583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.280898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.281206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.281215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.123 qpair failed and we were unable to recover it. 00:26:19.123 [2024-04-26 12:22:20.281511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.123 [2024-04-26 12:22:20.281826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.281834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.282212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.282513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.282523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.282840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.283153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.283161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.283466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.283795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.283804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.284160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.284429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.284438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 
00:26:19.124 [2024-04-26 12:22:20.284731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.284914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.284924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.285222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.285552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.285561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.285881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.286201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.286209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.286459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.286784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.286792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.287103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.287422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.287431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.287749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.288060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.288070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.288380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.288700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.288709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 
00:26:19.124 [2024-04-26 12:22:20.289051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.289417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.289426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.289731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.290094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.290104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.290392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.290713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.290722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.290976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.291313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.291322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.291634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.291959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.291968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.292308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.292632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.292641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.292973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.293287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.293296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 
00:26:19.124 [2024-04-26 12:22:20.293625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.293942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.124 [2024-04-26 12:22:20.293951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.124 qpair failed and we were unable to recover it. 00:26:19.124 [2024-04-26 12:22:20.294341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.125 [2024-04-26 12:22:20.294712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.125 [2024-04-26 12:22:20.294721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.125 qpair failed and we were unable to recover it. 00:26:19.125 [2024-04-26 12:22:20.294984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.125 [2024-04-26 12:22:20.295059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.125 [2024-04-26 12:22:20.295068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.125 qpair failed and we were unable to recover it. 00:26:19.125 [2024-04-26 12:22:20.295408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.125 [2024-04-26 12:22:20.295738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.125 [2024-04-26 12:22:20.295747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.125 qpair failed and we were unable to recover it. 00:26:19.125 [2024-04-26 12:22:20.295932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.296261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.296270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.406 qpair failed and we were unable to recover it. 00:26:19.406 [2024-04-26 12:22:20.296553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.296827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.296843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.406 qpair failed and we were unable to recover it. 00:26:19.406 [2024-04-26 12:22:20.296989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.297262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.297271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.406 qpair failed and we were unable to recover it. 
00:26:19.406 [2024-04-26 12:22:20.297592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.297823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.297832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.406 qpair failed and we were unable to recover it. 00:26:19.406 [2024-04-26 12:22:20.298133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.298417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.298426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.406 qpair failed and we were unable to recover it. 00:26:19.406 [2024-04-26 12:22:20.298675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.298967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.298977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.406 qpair failed and we were unable to recover it. 00:26:19.406 [2024-04-26 12:22:20.299280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.299571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.299580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.406 qpair failed and we were unable to recover it. 00:26:19.406 [2024-04-26 12:22:20.299897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.299974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.299985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.406 qpair failed and we were unable to recover it. 00:26:19.406 [2024-04-26 12:22:20.300272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.300596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.300605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.406 qpair failed and we were unable to recover it. 00:26:19.406 [2024-04-26 12:22:20.300887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.301096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.301105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.406 qpair failed and we were unable to recover it. 
00:26:19.406 [2024-04-26 12:22:20.301327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.301670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.406 [2024-04-26 12:22:20.301679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.406 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.301985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.302273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.302282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.302602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.302880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.302890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.303180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.303487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.303495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.303832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.304131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.304141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.304498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.304767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.304777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.304978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.305261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.305270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 
00:26:19.407 [2024-04-26 12:22:20.305595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.305941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.305950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.306243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.306521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.306530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.306835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.307126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.307134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.307457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.307764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.307773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.308093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.308414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.308423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.308719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.309000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.309010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.309329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.309655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.309664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 
00:26:19.407 [2024-04-26 12:22:20.310036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.310328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.310339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.310669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.311016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.311025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.311323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.311655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.311664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.311940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.312263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.312272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.312585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.312860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.312870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.313201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.313540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.313549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.313754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.314101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.314110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 
00:26:19.407 [2024-04-26 12:22:20.314322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.314641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.314650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.314951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.315323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.315331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.315650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.315964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.315973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.316308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.316478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.316487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.316799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.317082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.317091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.317406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.317723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.317732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.318046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.318365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.318374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 
00:26:19.407 [2024-04-26 12:22:20.318710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.319052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.319063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.319353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.319663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.319672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.319862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.320149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.320158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.320542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.320889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.320898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.321094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.321370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.321379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.321712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.322010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.322019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.322335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.322523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.322532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 
00:26:19.407 [2024-04-26 12:22:20.322841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.323170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.323179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.323558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.323826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.323835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.324061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.324391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.324401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.324700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.324998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.325009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.325328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.325628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.325638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.325971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.326291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.326300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.326523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.326851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.326861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 
00:26:19.407 [2024-04-26 12:22:20.327272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.327571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.327579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.327905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.328218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.328227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.328581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.328912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.328922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.329243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.329434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.329443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.329758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.330031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.330041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.330251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.330541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.330550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.330731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.331008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.331017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 
00:26:19.407 [2024-04-26 12:22:20.331343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.331642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.331651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.331977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.332299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.332308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.332621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.332916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.332925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.333235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.333553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.333562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.333878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.334167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.334176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.334489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.334803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.407 [2024-04-26 12:22:20.334811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.407 qpair failed and we were unable to recover it. 00:26:19.407 [2024-04-26 12:22:20.335104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.335427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.335436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 
00:26:19.408 [2024-04-26 12:22:20.335726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.336009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.336019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.336339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.336665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.336674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.336986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.337248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.337257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.337580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.337902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.337911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.338218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.338539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.338548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.338865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.339210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.339220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.339531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.339848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.339858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 
00:26:19.408 [2024-04-26 12:22:20.340042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.340388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.340397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.340689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.340920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.340930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.341099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.341314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.341327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.341658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.341971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.341980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.342294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.342462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.342472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.342766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.343090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.343100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.343415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.343719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.343729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 
00:26:19.408 [2024-04-26 12:22:20.344072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.344385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.344394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.344701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.345016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.345026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.345360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.345524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.345535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.345832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.346159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.346168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.346485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.346781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.346790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.347136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.347343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.347351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.347696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.348004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.348014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 
00:26:19.408 [2024-04-26 12:22:20.348307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.348646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.348654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.348956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.349255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.349264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.349575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.349909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.349919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.350238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.350584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.350592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.350882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.351188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.351197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.351510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.351832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.351845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.352143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.352477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.352486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 
00:26:19.408 [2024-04-26 12:22:20.352821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.353145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.353154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.353464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.353791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.353801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.354105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.354422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.354431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.354736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.355056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.355065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.355403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.355711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.355720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.356023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.356386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.356395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.356686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.356870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.356879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 
00:26:19.408 [2024-04-26 12:22:20.357103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.357454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.357463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.357812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.357970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.357981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.358280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.358626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.358635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.358953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.359257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.359265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.359583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.359901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.359910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.360227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.360554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.360563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.360875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.361175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.361184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 
00:26:19.408 [2024-04-26 12:22:20.361508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.361825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.361835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.362183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.362359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.362369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.362686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.362896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.362905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.363201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.363531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.363540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.363828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.364131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.364140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.364459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.364777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.364786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.364983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.365329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.365339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 
00:26:19.408 [2024-04-26 12:22:20.365656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.365935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.365944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.366251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.366520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.366529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.366830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.367124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.367134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.367308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.367610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.367619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.367952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.368267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.408 [2024-04-26 12:22:20.368276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.408 qpair failed and we were unable to recover it. 00:26:19.408 [2024-04-26 12:22:20.368565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.368850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.368859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.369207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.369530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.369539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 
00:26:19.409 [2024-04-26 12:22:20.369877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.370208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.370218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.370550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.370865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.370875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.371189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.371526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.371535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.371809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.372109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.372118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.372331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.372616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.372627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.372918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.373207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.373216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.373541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.373852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.373862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 
00:26:19.409 [2024-04-26 12:22:20.374232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.374458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.374467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.374760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.375065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.375074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.375380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.375699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.375707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.375997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.376333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.376342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.376661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.376974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.376984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.377287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.377582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.377591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.377894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.378284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.378293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 
00:26:19.409 [2024-04-26 12:22:20.378604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.378922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.378933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.379222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.379416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.379424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.379719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.379928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.379938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.380235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.380548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.380557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.380877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.381183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.381191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.381509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.381823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.381832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.382075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.382382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.382391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 
00:26:19.409 [2024-04-26 12:22:20.382711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.383089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.383098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.383393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.383682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.383691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.383881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.384255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.384264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.384597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.384900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.384909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.385125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.385452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.385461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.385782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.386113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.386122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.386445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.386637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.386647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 
00:26:19.409 [2024-04-26 12:22:20.386936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.387257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.387266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.387570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.387772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.387781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.388159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.388490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.388499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.388829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.389130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.389139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.389423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.389744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.389753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.390120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.390419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.390429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.390747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.391061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.391070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 
00:26:19.409 [2024-04-26 12:22:20.391386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.391694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.391702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.392038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.392374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.392382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.392692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.392964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.392974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.393222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.393537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.393546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3567608 Killed "${NVMF_APP[@]}" "$@" 00:26:19.409 [2024-04-26 12:22:20.393843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.394143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.394152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.394319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 12:22:20 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:26:19.409 [2024-04-26 12:22:20.394671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.394681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 
00:26:19.409 12:22:20 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:19.409 [2024-04-26 12:22:20.394937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 12:22:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:19.409 [2024-04-26 12:22:20.395283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.395293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 12:22:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:19.409 12:22:20 -- common/autotest_common.sh@10 -- # set +x 00:26:19.409 [2024-04-26 12:22:20.395587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.395924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.395934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.396235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.396559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.396568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.396917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.397157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.397166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.409 qpair failed and we were unable to recover it. 00:26:19.409 [2024-04-26 12:22:20.397404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.409 [2024-04-26 12:22:20.397684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.397694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.398026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.398333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.398342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.398685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.398963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.398973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 
00:26:19.410 [2024-04-26 12:22:20.399169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.399514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.399529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.399845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.400254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.400263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.400543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.400887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.400897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.401116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.401503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.401512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.401807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.402101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.402110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.402405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 12:22:20 -- nvmf/common.sh@470 -- # nvmfpid=3568636 00:26:19.410 [2024-04-26 12:22:20.402681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.402690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 
00:26:19.410 12:22:20 -- nvmf/common.sh@471 -- # waitforlisten 3568636 00:26:19.410 [2024-04-26 12:22:20.403020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 12:22:20 -- common/autotest_common.sh@817 -- # '[' -z 3568636 ']' 00:26:19.410 12:22:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:19.410 [2024-04-26 12:22:20.403281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.403290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 12:22:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.410 [2024-04-26 12:22:20.403606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 12:22:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:19.410 [2024-04-26 12:22:20.403928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 12:22:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.410 [2024-04-26 12:22:20.403937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.410 12:22:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:19.410 [2024-04-26 12:22:20.404299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 12:22:20 -- common/autotest_common.sh@10 -- # set +x 00:26:19.410 [2024-04-26 12:22:20.404662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.404671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.404984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.405244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.405254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.405578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.405852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.405861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 
00:26:19.410 [2024-04-26 12:22:20.406231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.406430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.406440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.406828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.407125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.407135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.407357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.407619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.407629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.407967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.408194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.408204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.408524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.408799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.408809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.409028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.409330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.409340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.409661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.409945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.409956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 
00:26:19.410 [2024-04-26 12:22:20.410244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.410524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.410534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.410862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.411214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.411224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.411520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.411830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.411845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.412049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.412354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.412364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.412659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.412857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.412867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.413207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.413407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.413417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.413650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.413928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.413940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 
00:26:19.410 [2024-04-26 12:22:20.414175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.414493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.414503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.414715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.415013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.415023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.415352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.415534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.415544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.415857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.416169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.416180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.416491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.416668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.416678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.417084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.417347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.417357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.417667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.417946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.417956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 
00:26:19.410 [2024-04-26 12:22:20.418265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.418419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.418430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.418658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.418924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.418934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.419277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.419600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.419610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.419761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.420054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.420064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.420388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.420735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.420745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.420950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.421235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.421245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.421433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.421650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.421660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 
00:26:19.410 [2024-04-26 12:22:20.422002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.422346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.422357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.422553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.422739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.422748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.423057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.423389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.423398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.423732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.424059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.424069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.424478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.424797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.424806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.425113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.425438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.425447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.425754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.425965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.425975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 
00:26:19.410 [2024-04-26 12:22:20.426287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.426590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.426599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.426903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.427218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.427228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.427595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.427796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.427804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.428194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.428508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.428518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.410 qpair failed and we were unable to recover it. 00:26:19.410 [2024-04-26 12:22:20.428703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.429028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.410 [2024-04-26 12:22:20.429038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.429254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.429466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.429475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.429683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.430045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.430056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 
00:26:19.411 [2024-04-26 12:22:20.430383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.430657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.430666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.431100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.431425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.431435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.431783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.431978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.431988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.432363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.432662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.432672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.433018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.433348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.433357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.433614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.433821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.433831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.434264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.434601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.434610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 
00:26:19.411 [2024-04-26 12:22:20.434907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.435133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.435142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.435455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.435774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.435783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.435970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.436261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.436270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.436595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.436799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.436810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.437156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.437362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.437372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.437660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.437900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.437910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.438013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.438328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.438337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 
00:26:19.411 [2024-04-26 12:22:20.438564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.438853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.438862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.439113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.439306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.439316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.439520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.439759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.439768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.439975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.440173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.440182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.440482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.440844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.440853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.441174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.441363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.441373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.441746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.442058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.442077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 
00:26:19.411 [2024-04-26 12:22:20.442390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.442715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.442723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.443056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.443146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.443157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.443209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.443534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.443543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.443850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.444031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.444040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.444427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.444698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.444707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.445103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.445408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.445417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.445820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.446169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.446179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 
00:26:19.411 [2024-04-26 12:22:20.446505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.446846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.446856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.447059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.447277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.447286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.447513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.447843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.447853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.448181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.448501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.448510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.448862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.449124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.449133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.449362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.449565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.449575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.449885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.450211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.450220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 
00:26:19.411 [2024-04-26 12:22:20.450570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.450750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.450759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.451063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.451301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.451310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.451519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.451875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.451884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.452200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.452488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.452497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.452877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.453053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.453062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.453373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.453671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.453680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.454022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.454396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.454405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 
00:26:19.411 [2024-04-26 12:22:20.454621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.454695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.454705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.454885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.455064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.455074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.455427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.455798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.455807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.456148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.456365] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:26:19.411 [2024-04-26 12:22:20.456407] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.411 [2024-04-26 12:22:20.456477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.456486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.456911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.457229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.411 [2024-04-26 12:22:20.457237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.411 qpair failed and we were unable to recover it. 00:26:19.411 [2024-04-26 12:22:20.457435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.457758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.457768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.458101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.458320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.458330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 
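The "Starting SPDK v24.05-pre ... / DPDK 23.11.0 initialization" line above marks the point where the freshly launched target begins EAL setup on core mask 0xF0; until initialization finishes and the TCP transport starts listening, peers connecting to 10.0.0.2:4420 keep getting errno = 111. A quick host-side way to see when the listener finally accepts connections is a plain TCP probe; this is an illustrative sketch using bash's /dev/tcp, not part of the test suite:

    probe_listener() {
        # Illustrative probe: succeeds once something accepts TCP on the given
        # address/port, the same condition the failing connect() calls are waiting for.
        local addr=${1:-10.0.0.2} port=${2:-4420}
        timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null
    }

    until probe_listener; do sleep 0.2; done
    echo "listener on 10.0.0.2:4420 is accepting connections"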
00:26:19.412 [2024-04-26 12:22:20.458661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.459112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.459122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.459373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.459735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.459744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.460110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.460428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.460438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.460622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.460861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.460872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.461221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.461571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.461581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.461777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.462122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.462133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.462383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.462668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.462678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 
00:26:19.412 [2024-04-26 12:22:20.463017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.463359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.463368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.463688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.464024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.464034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.464263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.464439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.464449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.464797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.465040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.465051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.465400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.465705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.465715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.466070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.466431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.466441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.466768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.467104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.467114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 
00:26:19.412 [2024-04-26 12:22:20.467472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.467827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.467840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.468252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.468516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.468525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.468832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.469197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.469207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.469382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.469585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.469594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.469919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.470220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.470229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.470476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.470702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.470711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.470997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.471175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.471185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 
00:26:19.412 [2024-04-26 12:22:20.471251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.471453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.471462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.471767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.471943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.471952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.472131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.472432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.472443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.472536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.472854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.472864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.473186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.473269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.473278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.473593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.473871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.473881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.474183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.474472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.474482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 
00:26:19.412 [2024-04-26 12:22:20.474841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.475204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.475213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.475526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.475851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.475861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.476170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.476471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.476479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.476795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.477137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.477147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.477471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.477756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.477765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.478089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.478285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.478295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.478501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.478847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.478857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 
00:26:19.412 [2024-04-26 12:22:20.479160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.479458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.479466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.479675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.479901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.479910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.480242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.480569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.480578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.480916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.481255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.481265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.481575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.481778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.481787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.482109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.482462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.482472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.482728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.483011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.483020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 
00:26:19.412 [2024-04-26 12:22:20.483333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.483534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.483543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.483791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.484111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.484120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.484447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.484731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.484740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.484944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.485270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.485279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.485618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.485943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.485953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.486338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.486512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.486521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.486834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.487176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.487185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 
00:26:19.412 [2024-04-26 12:22:20.487493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.487811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.487820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.488201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.488526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.412 [2024-04-26 12:22:20.488536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.488891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.489228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.489238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.489582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.489875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.489885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.412 qpair failed and we were unable to recover it. 00:26:19.412 [2024-04-26 12:22:20.490216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.412 [2024-04-26 12:22:20.490404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.490413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.490646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.490940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.490950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.491303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.491593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.491602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 
00:26:19.413 [2024-04-26 12:22:20.491918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.492069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.492078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.492290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.492566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.492576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.492930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.493265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.493274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.493617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.493908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.493918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.494155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.494367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.494376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.494710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.494917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.494927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.495280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.495635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.495645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 
00:26:19.413 [2024-04-26 12:22:20.495972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.496260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.496269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.496446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.496789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.496798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.497172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.497521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.497531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.497886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.498237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.498246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.498474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.498719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.498728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.498928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.499215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.499225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.499603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.499972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.499982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 
00:26:19.413 [2024-04-26 12:22:20.500165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.500453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.500462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.500788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.501011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.501020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.501219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.501366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.501375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.501719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.501920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.501931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.502262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.502460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.502470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.502832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.503163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.503172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.503507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.503712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.503721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 
00:26:19.413 [2024-04-26 12:22:20.504132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.504359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.504368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.504661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.505019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.505030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.505267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.505626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.505636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.505955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.506303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.506313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.506629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.506962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.506971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.507302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.507503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.507511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.507829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.508180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.508189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 
00:26:19.413 [2024-04-26 12:22:20.508497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.508853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.508864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.509187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.509519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.509527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.509818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.510141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.510151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.510343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.510719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.510728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.510956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.511305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.511314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.511487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.511690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.511699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.512019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.512361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.512370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 
00:26:19.413 [2024-04-26 12:22:20.512707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.513065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.513074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.513399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.513725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.513734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.514109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.514397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.514406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.514583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.514817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.514826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.515008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.515346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.515355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.515580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.515870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.515880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.516230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.516509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.516518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 
00:26:19.413 [2024-04-26 12:22:20.516831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.517016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.517027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.517243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.517521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.517530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.517888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.518085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.518095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.518400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.518687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.518696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.519102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.519428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.519437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.519737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.519954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.519963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.520303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.520657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.520667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 
00:26:19.413 [2024-04-26 12:22:20.520909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.521310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.521319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.521635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.521954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.521964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.522297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.522616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.413 [2024-04-26 12:22:20.522625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.413 qpair failed and we were unable to recover it. 00:26:19.413 [2024-04-26 12:22:20.522965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.523176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.523186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.523479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.523801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.523810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.524127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.524445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.524455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.524846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.525133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.525142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 
00:26:19.414 [2024-04-26 12:22:20.525437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.525740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.525749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.526056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.526384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.526393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.526725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.526927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.526938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.527240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.527577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.527586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.527809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.528121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.528130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.528308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.528642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.528651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.529003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.529310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.529319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 
00:26:19.414 [2024-04-26 12:22:20.529656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.529928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.529938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.530275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.530626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.530635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.530931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.530994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.531004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.531220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.531556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.531564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.531888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.532095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.532104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.532460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.532828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.532842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.533126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.533348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.533358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 
00:26:19.414 [2024-04-26 12:22:20.533676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.534009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.534019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.534344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.534683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.534691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.534906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.535114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.535123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.535352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.535508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.535517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.535786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.536106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.536116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.536437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.536748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.536757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.537072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.537424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.537432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 
00:26:19.414 [2024-04-26 12:22:20.537757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.537931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.537941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.538269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.538603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.538612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.538941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.539109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.539120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.539301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.539649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.539658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.539847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.540154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.540163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.540514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.540869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.540879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.541066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.541440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.541448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 
00:26:19.414 [2024-04-26 12:22:20.541655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.541963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.541973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.542300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.542496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.542505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.542813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.543137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.543147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.543462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.543749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.543758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.543775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:19.414 [2024-04-26 12:22:20.543971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.544329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.544338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.544641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.544962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.544974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.545149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.545402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.545412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 
00:26:19.414 [2024-04-26 12:22:20.545740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.545932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.545942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.546346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.546659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.546668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.547047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.547358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.547367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.547714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.548023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.548033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.548325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.548605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.548614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.548940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.549223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.549232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.549516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.549860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.549871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 
00:26:19.414 [2024-04-26 12:22:20.550239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.550537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.550546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.414 qpair failed and we were unable to recover it. 00:26:19.414 [2024-04-26 12:22:20.550871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.414 [2024-04-26 12:22:20.551232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.415 [2024-04-26 12:22:20.551242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.415 qpair failed and we were unable to recover it. 00:26:19.415 [2024-04-26 12:22:20.551571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.415 [2024-04-26 12:22:20.551752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.415 [2024-04-26 12:22:20.551761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.415 qpair failed and we were unable to recover it. 00:26:19.415 [2024-04-26 12:22:20.552001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.415 [2024-04-26 12:22:20.552277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.415 [2024-04-26 12:22:20.552287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.415 qpair failed and we were unable to recover it. 00:26:19.415 [2024-04-26 12:22:20.552622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.415 [2024-04-26 12:22:20.552877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.415 [2024-04-26 12:22:20.552886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.415 qpair failed and we were unable to recover it. 00:26:19.415 [2024-04-26 12:22:20.553138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.415 [2024-04-26 12:22:20.553467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.415 [2024-04-26 12:22:20.553477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.415 qpair failed and we were unable to recover it. 00:26:19.415 [2024-04-26 12:22:20.553798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.415 [2024-04-26 12:22:20.554107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.415 [2024-04-26 12:22:20.554118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.415 qpair failed and we were unable to recover it. 
00:26:19.416 [2024-04-26 12:22:20.604069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.604393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.604402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:19.416 qpair failed and we were unable to recover it.
00:26:19.416 [2024-04-26 12:22:20.604740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.604958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.604968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:19.416 qpair failed and we were unable to recover it.
00:26:19.416 [2024-04-26 12:22:20.605265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.605606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.605615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:19.416 qpair failed and we were unable to recover it.
00:26:19.416 [2024-04-26 12:22:20.605935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.606249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.606258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:19.416 qpair failed and we were unable to recover it.
00:26:19.416 [2024-04-26 12:22:20.606573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.606693] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:19.416 [2024-04-26 12:22:20.606724] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:19.416 [2024-04-26 12:22:20.606732] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:19.416 [2024-04-26 12:22:20.606740] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:19.416 [2024-04-26 12:22:20.606747] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:19.416 [2024-04-26 12:22:20.606853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.606862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:19.416 qpair failed and we were unable to recover it.
00:26:19.416 [2024-04-26 12:22:20.606919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:26:19.416 [2024-04-26 12:22:20.607086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.607038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:26:19.416 [2024-04-26 12:22:20.607179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:26:19.416 [2024-04-26 12:22:20.607180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:26:19.416 [2024-04-26 12:22:20.607391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.607401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:19.416 qpair failed and we were unable to recover it.
00:26:19.416 [2024-04-26 12:22:20.607614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.607812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.607822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:19.416 qpair failed and we were unable to recover it.
00:26:19.416 [2024-04-26 12:22:20.608045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.608411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.608421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:19.416 qpair failed and we were unable to recover it.
00:26:19.416 [2024-04-26 12:22:20.608611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.608892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.608902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:19.416 qpair failed and we were unable to recover it.
00:26:19.416 [2024-04-26 12:22:20.609238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.609558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.609567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:19.416 qpair failed and we were unable to recover it.
00:26:19.416 [2024-04-26 12:22:20.609770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.610100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.416 [2024-04-26 12:22:20.610110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:19.416 qpair failed and we were unable to recover it.
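The errno = 111 reported by posix_sock_create throughout this run is ECONNREFUSED on Linux: each TCP connect() to the target at 10.0.0.2 port 4420 (the NVMe/TCP listen address the test expects) is rejected because nothing is accepting on that port yet, so nvme_tcp_qpair_connect_sock gives up on the qpair. Below is a minimal sketch of the same failure mode, illustrative only and not part of the console output, assuming nothing is listening on 127.0.0.1:4420 locally.

/* Illustrative only -- reproduces the connect() failure mode seen above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port used by the target in this test */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumption: no listener on this address/port */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener behind the port, Linux fails the connect with
         * ECONNREFUSED, i.e. errno 111, the value flooding the log above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Built with a plain cc invocation, this should print "connect() failed, errno = 111 (Connection refused)", matching the messages the test emits until the target starts listening.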
00:26:19.688 [2024-04-26 12:22:20.638529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.638844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.638854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.639167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.639450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.639459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.639790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.640130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.640139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.640436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.640608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.640618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.640809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.641188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.641197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.641531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.641842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.641852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.642236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.642284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.642294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 
00:26:19.688 [2024-04-26 12:22:20.642451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.642792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.642801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.643093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.643447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.643456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.643654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.643813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.643822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.644160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.644353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.644362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.644538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.644715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.644725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.645047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.645340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.645349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.645739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.645971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.645981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 
00:26:19.688 [2024-04-26 12:22:20.646343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.646661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.646670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.646719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.647040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.647049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.647333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.647631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.647646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.647958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.648298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.648307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.648625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.648784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.648793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.649165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.649362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.649371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.649671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.650001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.650010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 
00:26:19.688 [2024-04-26 12:22:20.650262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.650568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.650579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.650760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.651056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.651065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-04-26 12:22:20.651247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.651541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-04-26 12:22:20.651550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.651852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.652219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.652227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.652536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.652772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.652781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.653085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.653392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.653401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.653728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.653915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.653925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 
00:26:19.689 [2024-04-26 12:22:20.654210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.654395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.654404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.654535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.654817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.654827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.655152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.655311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.655320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.655668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.655866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.655878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.655931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.656225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.656234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.656555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.656804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.656812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.657158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.657467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.657476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 
00:26:19.689 [2024-04-26 12:22:20.657794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.658143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.658153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.658481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.658693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.658702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.659040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.659335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.659345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.659524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.659844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.659854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.660260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.660502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.660512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.660809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.661224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.661234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.661538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.661720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.661729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 
00:26:19.689 [2024-04-26 12:22:20.662132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.662438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.662448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.662667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.662979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.662988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.663313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.663484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.663494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.663897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.664210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.664219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.664539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.664846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.664857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.665190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.665528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.665538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-04-26 12:22:20.665663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.665972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.665981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 
00:26:19.689 [2024-04-26 12:22:20.666295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-04-26 12:22:20.666583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.666592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.666768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.667099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.667108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.667415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.667744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.667753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.667945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.668251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.668260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.668474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.668687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.668697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.668893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.669184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.669193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.669369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.669685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.669694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 
00:26:19.690 [2024-04-26 12:22:20.669901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.670086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.670095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.670262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.670563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.670572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.670717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.670983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.670994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.671196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.671494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.671503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.671699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.671982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.671992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.672310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.672513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.672522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.672752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.673063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.673073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 
00:26:19.690 [2024-04-26 12:22:20.673423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.673600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.673609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.673939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.674296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.674306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.674656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.674977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.674987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.675240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.675423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.675432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.675799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.675997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.676006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.676346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.676758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.676766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.676896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.677217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.677226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 
00:26:19.690 [2024-04-26 12:22:20.677413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.677742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.677751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.678062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.678383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-04-26 12:22:20.678393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-04-26 12:22:20.678703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.679003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.679013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.679181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.679465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.679474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.679851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.680167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.680177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.680506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.680701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.680712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.680899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.681197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.681206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 
00:26:19.691 [2024-04-26 12:22:20.681414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.681603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.681611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.682011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.682327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.682337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.682667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.682979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.682989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.683176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.683481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.683491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.683665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.683886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.683896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.684095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.684406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.684417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.684721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.684780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.684788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 
00:26:19.691 [2024-04-26 12:22:20.685084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.685435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.685444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.685731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.686049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.686059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.686377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.686668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.686677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.686873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.687181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.687190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.687512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.687817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.687827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.688048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.688397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.688407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.688718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.688901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.688910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 
00:26:19.691 [2024-04-26 12:22:20.689232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.689579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.689588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.689798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.690012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.690021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-04-26 12:22:20.690326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.690539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-04-26 12:22:20.690548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.690848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.691153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.691162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.691494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.691678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.691687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.692040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.692116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.692125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.692300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.692622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.692631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 
00:26:19.692 [2024-04-26 12:22:20.692952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.693002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.693010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.693208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.693551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.693560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.693859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.694065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.694074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.694270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.694460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.694470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.694826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.695192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.695201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.695417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.695725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.695735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.695900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.696205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.696214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 
00:26:19.692 [2024-04-26 12:22:20.696530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.696731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.696741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.697037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.697225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.697235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.697564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.697764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.697773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.698112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.698193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.698203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.698382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.698708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.698718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.699027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.699331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.699341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.699519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.699812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.699821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 
00:26:19.692 [2024-04-26 12:22:20.700032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.700340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.700350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.700678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.700859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.700869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.701064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.701362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.701371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.701553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.701742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.701751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.701848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.702049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.702058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.702335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.702581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.702590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-04-26 12:22:20.702769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.703106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-04-26 12:22:20.703115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 
00:26:19.693 [2024-04-26 12:22:20.703448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.703620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.703628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.703800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.703965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.703975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.704263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.704570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.704579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.704766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.705154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.705164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.705554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.705880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.705890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.706094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.706432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.706441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.706643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.706918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.706928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 
00:26:19.693 [2024-04-26 12:22:20.707283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.707483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.707492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.707749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.708097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.708106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.708403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.708754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.708763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.709079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.709155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.709164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.709333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.709522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.709530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.709674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.709995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.710005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.710210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.710549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.710559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 
00:26:19.693 [2024-04-26 12:22:20.710900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.711135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.711146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.711536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.711754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.711763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.711824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.712101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.712110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.712433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.712614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.712623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.712978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.713301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.713310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.713508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.713747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.713756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.714143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.714313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.714328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 
00:26:19.693 [2024-04-26 12:22:20.714662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.715024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.715033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.715384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.715682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.715691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.716035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.716301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.716310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.716629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.716682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.716690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.717051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.717427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.717435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.717625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.717930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.717939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-04-26 12:22:20.718161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-04-26 12:22:20.718482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.718491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 
00:26:19.694 [2024-04-26 12:22:20.718702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.718997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.719007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.719368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.719558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.719567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.719758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.719928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.719937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.720145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.720468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.720477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.720793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.721163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.721172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.721362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.721733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.721742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.722147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.722347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.722355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 
00:26:19.694 [2024-04-26 12:22:20.722539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.722742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.722751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.723039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.723382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.723392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.723736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.724054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.724063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.724452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.724645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.724654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.725006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.725301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.725311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.725637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.725940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.725950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.726278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.726626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.726634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 
00:26:19.694 [2024-04-26 12:22:20.726833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.727131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.727140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.727325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.727622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.727631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.727805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.728102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.728112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.728451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.728792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.728801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.729168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.729495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.729504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.729870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.730159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.730168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.730344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.730613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.730622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 
00:26:19.694 [2024-04-26 12:22:20.730804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.731128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.731138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.731484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.731643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.731652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.731946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.732281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.732290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.732480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.732698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.732707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.732947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.733160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.733169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.733483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.733681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.733689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-04-26 12:22:20.734014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-04-26 12:22:20.734309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.734319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 
00:26:19.695 [2024-04-26 12:22:20.734686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.735008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.735020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.735236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.735402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.735411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.735722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.736022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.736031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.736329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.736642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.736652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.736968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.737291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.737300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.737655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.737954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.737963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.738299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.738490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.738500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 
00:26:19.695 [2024-04-26 12:22:20.738710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.738988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.738998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.739307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.739622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.739631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.739897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.740121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.740133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.740451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.740762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.740772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.741011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.741379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.741388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.741717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.742036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.742045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.742381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.742575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.742583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 
00:26:19.695 [2024-04-26 12:22:20.742844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.743164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.743173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.743498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.743823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.743833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.744145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.744466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.744476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.744660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.744888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.744898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.745196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.745247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.745258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.745591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.745864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.745876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.746187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.746489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.746498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 
00:26:19.695 [2024-04-26 12:22:20.746870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.747167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.747175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.747422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.747604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.747613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.747661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.747957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.747967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.748293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.748619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.748628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.748813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.749192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.749202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.749544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.749886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.749895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-04-26 12:22:20.750269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-04-26 12:22:20.750595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.750605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 
00:26:19.696 [2024-04-26 12:22:20.750954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.751268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.751277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.751468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.751852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.751863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.752199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.752520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.752529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.752848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.753167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.753176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.753499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.753805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.753814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.754135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.754324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.754333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.754520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.754701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.754711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 
00:26:19.696 [2024-04-26 12:22:20.755034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.755340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.755349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.755691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.755887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.755896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.756206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.756550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.756559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.756871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.757063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.757072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.757345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.757693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.757702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.757916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.758208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.758217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.758542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.758865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.758875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 
00:26:19.696 [2024-04-26 12:22:20.759144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.759473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.759481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.759887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.760227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.760237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.760432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.760734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.760743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.761061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.761276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.761285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.761458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.761645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.761655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.761972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.762285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.762294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.762603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.762917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.762926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 
00:26:19.696 [2024-04-26 12:22:20.763262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.763581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.763590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-04-26 12:22:20.763917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-04-26 12:22:20.764048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.764065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.764396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.764721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.764729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.765054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.765360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.765369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.765664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.765990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.766000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.766342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.766685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.766695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.766994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.767330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.767339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 
00:26:19.697 [2024-04-26 12:22:20.767656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.767864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.767873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.768071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.768265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.768273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.768469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.768768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.768785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.769098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.769398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.769407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.769734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.769912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.769921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.770226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.770528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.770536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.770763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.771042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.771052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 
00:26:19.697 [2024-04-26 12:22:20.771367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.771438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.771446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.771727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.772014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.772023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.772338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.772639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.772649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.772871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.773052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.773061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.773357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.773538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.773546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.773707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.773872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.773883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.774214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.774402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.774411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 
00:26:19.697 [2024-04-26 12:22:20.774734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.775109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.775121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.775461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.775787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.775797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.776068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.776372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.776383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.776570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.776867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.776878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.777212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.777527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.777536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.777874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.777927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.777937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.778206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.778385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.778394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 
00:26:19.697 [2024-04-26 12:22:20.778691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.778889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.778898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.779256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.779576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-04-26 12:22:20.779586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-04-26 12:22:20.779922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.780204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.780214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.780537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.780755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.780764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.781051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.781290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.781299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.781494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.781541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.781550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.781847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.782014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.782024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 
00:26:19.698 [2024-04-26 12:22:20.782208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.782490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.782499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.782848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.783011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.783021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.783317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.783662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.783671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.783998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.784295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.784305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.784425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.784707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.784716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.785029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.785365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.785373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.785748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.785988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.785997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 
00:26:19.698 [2024-04-26 12:22:20.786177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.786464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.786473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.786655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.786963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.786981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.787182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.787525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.787535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.787880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.788188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.788199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.788520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.788846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.788857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.789165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.789470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.789480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.789817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.790140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.790150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 
00:26:19.698 [2024-04-26 12:22:20.790331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.790658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.790668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.790994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.791300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.791311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.791645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.791961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.791972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.792326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.792596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.792606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.792797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.793065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.793075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.793380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.793695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.793705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.794014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.794213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.794223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 
00:26:19.698 [2024-04-26 12:22:20.794543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.794742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.794752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.795077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.795395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.795405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-04-26 12:22:20.795572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.795929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-04-26 12:22:20.795939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.796287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.796637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.796647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.796965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.797157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.797167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.797439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.797791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.797801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.798128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.798299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.798310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 
00:26:19.699 [2024-04-26 12:22:20.798646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.798952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.798964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.799290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.799524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.799533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.799875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.800189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.800199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.800387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.800733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.800743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.801069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.801309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.801318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.801567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.801639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.801647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.801830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.802153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.802163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 
00:26:19.699 [2024-04-26 12:22:20.802499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.802830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.802843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.803176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.803495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.803504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.803844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.804010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.804021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.804320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.804648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.804658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.804978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.805168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.805177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.805372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.805692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.805701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.806006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.806289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.806298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 
00:26:19.699 [2024-04-26 12:22:20.806678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.806985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.806995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.807208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.807494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.807503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.807806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.808015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.808024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.808201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.808368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.808376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.808757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.809049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.809060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.809358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.809646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.809656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.809884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.810109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.810120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 
00:26:19.699 [2024-04-26 12:22:20.810431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.810745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.810754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.811070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.811462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.811471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.811766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.812129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.812138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-04-26 12:22:20.812453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.812802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-04-26 12:22:20.812811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.813133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.813398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.813408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.813745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.813918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.813928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.814268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.814591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.814600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 
00:26:19.700 [2024-04-26 12:22:20.814911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.815213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.815222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.815551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.815855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.815864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.816229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.816531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.816540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.816851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.817024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.817034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.817336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.817645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.817654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.818016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.818319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.818330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.818685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.818883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.818892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 
00:26:19.700 [2024-04-26 12:22:20.819172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.819463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.819472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.819560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.819722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.819732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.820040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.820214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.820223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.820611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.820915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.820926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.821264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.821409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.821418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.821750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.822078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.822088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.822287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.822455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.822464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 
00:26:19.700 [2024-04-26 12:22:20.822653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.822945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.822955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.823287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.823487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.823497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.823830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.824199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.824209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.824525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.824705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.824714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.824946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.825175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.825185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.825506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.825697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.825707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.826062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.826401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.826411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 
00:26:19.700 [2024-04-26 12:22:20.826601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.826929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.826940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.827148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.827423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.827432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.827750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.828065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.828074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-04-26 12:22:20.828479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.828784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-04-26 12:22:20.828793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.701 [2024-04-26 12:22:20.828980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.829320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.829329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-04-26 12:22:20.829625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.829941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.829952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-04-26 12:22:20.830280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.830590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.830599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 
00:26:19.701 [2024-04-26 12:22:20.830785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.830995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.831005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-04-26 12:22:20.831357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.831684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.831695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-04-26 12:22:20.831887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.832087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.832096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-04-26 12:22:20.832290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.832650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.832659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-04-26 12:22:20.832981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.833308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.833319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-04-26 12:22:20.833658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.833986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.833995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-04-26 12:22:20.834179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.834398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-04-26 12:22:20.834407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 
00:26:19.701 [2024-04-26 12:22:20.834601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.701 [2024-04-26 12:22:20.834792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.701 [2024-04-26 12:22:20.834801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:19.701 qpair failed and we were unable to recover it.
00:26:19.701 [2024-04-26 12:22:20.835169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.701 [2024-04-26 12:22:20.835491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.701 [2024-04-26 12:22:20.835501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420
00:26:19.701 qpair failed and we were unable to recover it.
00:26:19.701 [... the same four-line sequence (two posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420" entry, then "qpair failed and we were unable to recover it.") repeats with only the timestamps changing, through [2024-04-26 12:22:20.920370] at console time 00:26:19.974 ...]
00:26:19.974 [2024-04-26 12:22:20.920440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.974 [2024-04-26 12:22:20.920752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.974 [2024-04-26 12:22:20.920761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.974 qpair failed and we were unable to recover it. 00:26:19.974 [2024-04-26 12:22:20.921143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.974 [2024-04-26 12:22:20.921347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.974 [2024-04-26 12:22:20.921355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.974 qpair failed and we were unable to recover it. 00:26:19.974 [2024-04-26 12:22:20.921689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.974 [2024-04-26 12:22:20.922007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.974 [2024-04-26 12:22:20.922018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.974 qpair failed and we were unable to recover it. 00:26:19.974 [2024-04-26 12:22:20.922194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.974 [2024-04-26 12:22:20.922546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.922555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.922767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.923056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.923065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.923281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.923518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.923527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.923628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.923945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.923957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 
00:26:19.975 [2024-04-26 12:22:20.924142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.924503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.924511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.924687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.925023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.925033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.925421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.925599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.925607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.925795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.926100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.926110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.926453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.926802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.926811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.926991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.927260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.927269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.927586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.927907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.927917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 
00:26:19.975 [2024-04-26 12:22:20.928332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.928514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.928524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.928850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.929017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.929026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.929309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.929600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.929610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.929926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.930274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.930282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.930500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.930655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.930663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.930845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.931159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.931168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.931389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.931576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.931586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 
00:26:19.975 [2024-04-26 12:22:20.931906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.932214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.932223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.932452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.932792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.932801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.932954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.933160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.933169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.933497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.933748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.975 [2024-04-26 12:22:20.933757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.975 qpair failed and we were unable to recover it. 00:26:19.975 [2024-04-26 12:22:20.933960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.934286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.934295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.934474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.934811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.934821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.935138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.935372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.935382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 
00:26:19.976 [2024-04-26 12:22:20.935723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.935796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.935806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.936015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.936350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.936358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.936666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.936965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.936975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.937187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.937375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.937385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.937590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.937828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.937843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.938209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.938466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.938475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.938671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.938722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.938730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 
00:26:19.976 [2024-04-26 12:22:20.939047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.939390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.939399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.939716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.939941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.939950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.940339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.940670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.940680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.940731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.941066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.941076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.941483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.941826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.941835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.942050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.942267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.942277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.942605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.943007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.943018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 
00:26:19.976 [2024-04-26 12:22:20.943239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.943522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.943531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.943717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.943906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.943914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.944248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.944608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.944617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.944920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.945099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.945108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.945327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.945625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.945634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.945808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.946115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.976 [2024-04-26 12:22:20.946126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.976 qpair failed and we were unable to recover it. 00:26:19.976 [2024-04-26 12:22:20.946443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.946784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.946794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 
00:26:19.977 [2024-04-26 12:22:20.946916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.947231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.947240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.947603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.947905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.947915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.948219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.948414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.948423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.948705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.948994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.949003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.949215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.949531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.949539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.949719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.950019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.950028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.950191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.950499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.950508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 
00:26:19.977 [2024-04-26 12:22:20.950687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.950732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.950740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.950950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.951273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.951287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.951493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.951822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.951832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.952012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.952246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.952256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.952589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.952918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.952928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.953256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.953310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.953319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.953740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.954073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.954083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 
00:26:19.977 [2024-04-26 12:22:20.954390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.954605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.954614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.954994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.955305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.955314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.955647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.955698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.955707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.956072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.956378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.956387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.956572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.956874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.956885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.957186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.957494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.957503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.957698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.957972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.957981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 
00:26:19.977 [2024-04-26 12:22:20.958317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.958624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.958633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.958953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.959281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.959290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.959616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.959968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.959977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.977 [2024-04-26 12:22:20.960155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.960358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.977 [2024-04-26 12:22:20.960367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.977 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.960688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.960859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.960868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.961009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.961341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.961349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.961561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.961857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.961867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 
00:26:19.978 [2024-04-26 12:22:20.962269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.962570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.962579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.962914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.963225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.963234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.963428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.963651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.963660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.963952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.964174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.964183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.964383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.964611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.964620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.964800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.965118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.965128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.965445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.965730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.965739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 
00:26:19.978 [2024-04-26 12:22:20.966030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.966204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.966213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.966521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.966711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.966720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.967003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.967341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.967350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.967653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.967871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.967880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.968177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.968517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.968526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.968819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.969016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.969025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.969322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.969627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.969636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 
00:26:19.978 [2024-04-26 12:22:20.969924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.969976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.969986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.970163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.970503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.970512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.970683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.970998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.971007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.971177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.971580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.971589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.971885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.972177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.972186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.972483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.972793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.972802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 00:26:19.978 [2024-04-26 12:22:20.973181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.973388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.978 [2024-04-26 12:22:20.973397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.978 qpair failed and we were unable to recover it. 
00:26:19.979 [2024-04-26 12:22:20.973556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.973892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.973902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.974210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.974491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.974500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.974792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.975090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.975099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.975410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.975762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.975771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.975972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.976358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.976367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.976685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.977014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.977023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.977249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.977565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.977574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 
00:26:19.979 [2024-04-26 12:22:20.977894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.978204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.978213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.978548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.978879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.978888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.979201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.979403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.979413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.979705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.979988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.979998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.980285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.980591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.980600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.980968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.981278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.981287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.981595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.981765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.981774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 
00:26:19.979 [2024-04-26 12:22:20.981954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.982233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.982242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.982443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.982749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.982759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.982958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.983262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.983272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.983584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.983851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.983861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.984174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.984348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.984357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.984558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.984845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.984855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.985143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.985468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.985479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 
00:26:19.979 [2024-04-26 12:22:20.985802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.985983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.985993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.986161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.986500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.986510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.986778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.987008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.987025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.987306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.987612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.987620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.987821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.988121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.988130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.979 qpair failed and we were unable to recover it. 00:26:19.979 [2024-04-26 12:22:20.988311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.979 [2024-04-26 12:22:20.988527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.988544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.988893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.989083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.989093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 
00:26:19.980 [2024-04-26 12:22:20.989275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.989491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.989500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.989683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.990016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.990026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.990353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.990701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.990710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.990883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.991177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.991186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.991469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.991664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.991673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.991950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.992246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.992256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.992304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.992584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.992593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 
00:26:19.980 [2024-04-26 12:22:20.992915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.993219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.993228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.993546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.993810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.993819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.994113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.994376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.994386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.994723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.994884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.994894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.995186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.995231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.995240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.995531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.995825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.995835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.996167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.996456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.996466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 
00:26:19.980 [2024-04-26 12:22:20.996802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.997033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.997042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.997264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.997447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.997456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.997654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.997998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.998007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.998223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.998379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.998388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.998707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.999012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.999022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.999395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.999572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:20.999581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:20.999900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:21.000105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:21.000114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 
00:26:19.980 [2024-04-26 12:22:21.000295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:21.000565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:21.000574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:21.000764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:21.000944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:21.000953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:21.001268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:21.001469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:21.001478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:21.001743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:21.002012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:21.002022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.980 qpair failed and we were unable to recover it. 00:26:19.980 [2024-04-26 12:22:21.002204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:21.002488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.980 [2024-04-26 12:22:21.002498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.002680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.003019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.003029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.003375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.003645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.003655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 
00:26:19.981 [2024-04-26 12:22:21.003835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.004160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.004169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.004342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.004663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.004672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.005023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.005320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.005329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.005500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.005801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.005810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.006126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.006285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.006294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.006625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.006960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.006969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.007264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.007555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.007563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 
00:26:19.981 [2024-04-26 12:22:21.007729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.008108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.008118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.008411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.008701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.008710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.009060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.009369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.009378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.009699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.009893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.009902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.010223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.010532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.010541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.010754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.011043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.011052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.011383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.011584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.011593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 
00:26:19.981 [2024-04-26 12:22:21.011772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.011941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.011958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.012254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.012547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.012559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.012868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.013149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.013159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.013495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.013810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.013819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.014112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.014446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.014455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.014636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.014854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.014864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.015049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.015270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.015279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 
00:26:19.981 [2024-04-26 12:22:21.015602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.015923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.015934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.016198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.016388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.016397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.016752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.017036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.017046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.981 [2024-04-26 12:22:21.017397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.017716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.981 [2024-04-26 12:22:21.017725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.981 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.018078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.018388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.018398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.018736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.018900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.018909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.019227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.019562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.019571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 
00:26:19.982 [2024-04-26 12:22:21.019896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.020214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.020223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.020540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.020791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.020800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.020973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.021293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.021302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.021598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.021648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.021656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.021988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.022293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.022302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.022642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.022847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.022857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.023166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.023491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.023500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 
00:26:19.982 [2024-04-26 12:22:21.023848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.024012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.024021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.024256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.024438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.024446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.024790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.025051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.025061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.025392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.025660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.025669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.025977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.026173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.026182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.026367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.026715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.026725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.026910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.027228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.027237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 
00:26:19.982 [2024-04-26 12:22:21.027571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.027925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.027934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.028279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.028450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.028459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.028784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.029084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.029093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.029419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.029614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.029624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.029875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.030186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.030195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.030494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.030825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.030835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.031174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.031517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.031527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 
00:26:19.982 [2024-04-26 12:22:21.031737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.032013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.032022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.032219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.032393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.032402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.032713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.033009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.033019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.982 qpair failed and we were unable to recover it. 00:26:19.982 [2024-04-26 12:22:21.033366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.982 [2024-04-26 12:22:21.033553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.033562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.033872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.034197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.034206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.034529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.034698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.034706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.035025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.035332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.035341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 
00:26:19.983 [2024-04-26 12:22:21.035539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.035726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.035736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.036063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.036339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.036348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.036648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.036984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.036994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.037333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.037655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.037664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.037852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.038117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.038126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.038476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.038826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.038835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.039138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.039447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.039457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 
00:26:19.983 [2024-04-26 12:22:21.039800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.040152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.040162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.040496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.040853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.040863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.041052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.041387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.041396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.041719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.042002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.042013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.042267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.042453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.042462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.042659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.043026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.043036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.043225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.043556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.043565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 
00:26:19.983 [2024-04-26 12:22:21.043752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.044071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.044081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.044422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.044587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.044595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.044876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.045198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.045207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.045531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.045725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.045734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.045915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.046255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.046264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.983 qpair failed and we were unable to recover it. 00:26:19.983 [2024-04-26 12:22:21.046573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.983 [2024-04-26 12:22:21.046777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.046787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.047119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.047425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.047434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 
00:26:19.984 [2024-04-26 12:22:21.047775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.048090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.048100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.048481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.048650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.048661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.048877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.049189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.049198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.049481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.049836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.049849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.050033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.050194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.050203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.050550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.050738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.050747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.051077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.051411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.051419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 
00:26:19.984 [2024-04-26 12:22:21.051738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.052079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.052089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.052403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.052579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.052589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.052995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.053378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.053388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.053738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.054048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.054058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.054365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.054707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.054716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.054887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.054961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.054970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.055311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.055612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.055622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 
00:26:19.984 [2024-04-26 12:22:21.055808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.056113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.056122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.056311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.056607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.056616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.056807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.057027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.057037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.057233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.057538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.057549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.057730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.058027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.058037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.058362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.058630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.058639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.058849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.059159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.059168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 
00:26:19.984 [2024-04-26 12:22:21.059351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.059609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.059618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.059966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.060153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.060162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.060477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.060671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.060680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.061033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.061336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.984 [2024-04-26 12:22:21.061346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.984 qpair failed and we were unable to recover it. 00:26:19.984 [2024-04-26 12:22:21.061607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.061785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.061794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.061986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.062173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.062182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.062353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.062535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.062545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 
00:26:19.985 [2024-04-26 12:22:21.062885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.063218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.063228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.063549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.063860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.063871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.064193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.064503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.064512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.064856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.065185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.065194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.065587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.065866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.065875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.066048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.066343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.066352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.066529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.066711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.066720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 
00:26:19.985 [2024-04-26 12:22:21.067031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.067352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.067362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.067568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.067861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.067870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.067923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.068119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.068129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.068324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.068641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.068650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.068699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.068997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.069007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.069321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.069598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.069610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.069958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.070262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.070271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 
00:26:19.985 [2024-04-26 12:22:21.070674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.071007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.071017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.071168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.071482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.071491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.071832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.072155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.072165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.072482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.072663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.072672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.073033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.073330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.073340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.073704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.073997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.074007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.074200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.074563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.074573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 
00:26:19.985 [2024-04-26 12:22:21.074920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.075246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.075262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.075561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.075887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.075899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.076206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.076397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.076407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.076743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.077050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.077060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.985 qpair failed and we were unable to recover it. 00:26:19.985 [2024-04-26 12:22:21.077366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.077558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.985 [2024-04-26 12:22:21.077567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.077874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.078192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.078201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.078426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.078625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.078639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 
00:26:19.986 [2024-04-26 12:22:21.078967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.079291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.079300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.079593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.079879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.079889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.080198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.080555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.080565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.080791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.080983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.080993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.081328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.081637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.081647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.081986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.082280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.082290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.082635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.082931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.082941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 
00:26:19.986 [2024-04-26 12:22:21.083263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.083581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.083591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.083910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.084184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.084193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.084499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.084905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.084915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.085095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.085381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.085390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.085688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.085887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.085897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.086092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.086375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.086384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.086614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.086928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.086938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 
00:26:19.986 [2024-04-26 12:22:21.087131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.087313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.087323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.087500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.087677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.087686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.087945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.088122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.088132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.088469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.088788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.088797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.089151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.089475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.089484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.089802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.089982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.089991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.090204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.090416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.090425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 
00:26:19.986 [2024-04-26 12:22:21.090629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.090994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.091003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.091322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.091515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.091523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.091818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.092110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.092120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.092289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.092529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.092538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.092771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.093079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.093088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.986 qpair failed and we were unable to recover it. 00:26:19.986 [2024-04-26 12:22:21.093476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.986 [2024-04-26 12:22:21.093801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.093810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.094127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.094469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.094478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 
00:26:19.987 [2024-04-26 12:22:21.094808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.095117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.095126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.095464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.095774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.095784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.096094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.096423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.096432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.096770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.097081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.097090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.097412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.097729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.097738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.097923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.098276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.098285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.098455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.098768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.098777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 
00:26:19.987 [2024-04-26 12:22:21.099116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.099314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.099324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.099616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.099920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.099929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.100255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.100601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.100610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.100954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.101243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.101252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.101429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.101636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.101645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.101961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.102196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.102212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.102483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.102792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.102801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 
00:26:19.987 [2024-04-26 12:22:21.102854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.103029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.103039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.103378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.103573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.103582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.103908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.104120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.104129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.104460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.104801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.104812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.105109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.105309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.105318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.105486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.105673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.105682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.105982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.106295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.106303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 
00:26:19.987 [2024-04-26 12:22:21.106580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.106895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.106905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.107163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.107343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.107351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.107540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.107895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.107905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.108213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.108543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.108552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.108890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.109184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.109192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.987 qpair failed and we were unable to recover it. 00:26:19.987 [2024-04-26 12:22:21.109488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.109787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.987 [2024-04-26 12:22:21.109796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.110104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.110272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.110281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 
00:26:19.988 [2024-04-26 12:22:21.110470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.110803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.110811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.111150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.111481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.111489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.111674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.111976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.111986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.112301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.112627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.112637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.112820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.113014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.113024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.113361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.113521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.113531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.113875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.114138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.114146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 
00:26:19.988 [2024-04-26 12:22:21.114451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.114644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.114654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.114956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.115267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.115276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.115477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.115788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.115797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.116055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.116407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.116416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.116602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.116905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.116915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.117242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.117441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.117450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.117766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.118053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.118063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 
00:26:19.988 [2024-04-26 12:22:21.118234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.118559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.118568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.118887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.119065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.119074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.119411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.119734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.119744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.120020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.120235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.120244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.120569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.120904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.120913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.121237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.121551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.121560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.121860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.122174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.122183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 
00:26:19.988 [2024-04-26 12:22:21.122505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.122794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.122804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.123172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.123492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.123501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.988 qpair failed and we were unable to recover it. 00:26:19.988 [2024-04-26 12:22:21.123819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.124121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.988 [2024-04-26 12:22:21.124131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.124323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.124712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.124721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.124952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.125187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.125196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.125381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.125769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.125778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.126087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.126254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.126262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 
00:26:19.989 [2024-04-26 12:22:21.126610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.126969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.126978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.127204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.127574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.127583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.127768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.128089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.128099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.128326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.128656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.128665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.128980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.129281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.129290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.129603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.129795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.129805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.130015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.130258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.130267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 
00:26:19.989 [2024-04-26 12:22:21.130584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.130888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.130897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.130978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.131027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.131035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.131326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.131520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.131529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.131724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.131994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.132003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.132305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.132608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.132617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.132953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.133120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.133131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.133296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.133585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.133594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 
00:26:19.989 [2024-04-26 12:22:21.133921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.134210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.134219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.134403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.134790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.134799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.135153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.135460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.135470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.135788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.136094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.136104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.136390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.136707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.136717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.136912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.137191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.137200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.137367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.137688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.137696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 
00:26:19.989 [2024-04-26 12:22:21.137913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.138092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.138100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.138283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.138582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.138590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.138954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.139264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.139273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.989 qpair failed and we were unable to recover it. 00:26:19.989 [2024-04-26 12:22:21.139447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.989 [2024-04-26 12:22:21.139740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.139749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.140042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.140252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.140261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.140579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.140842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.140852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.141148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.141344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.141353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 
00:26:19.990 [2024-04-26 12:22:21.141569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.141879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.141889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.142214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.142260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.142270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.142457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.142775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.142785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.143114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.143424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.143433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.143611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.143890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.143900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.144220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.144406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.144415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.144594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.144892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.144902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 
00:26:19.990 [2024-04-26 12:22:21.145238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.145553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.145563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.145847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.146030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.146040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.146376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.146698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.146708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.147099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.147445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.147454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.147773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.147968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.147978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.148282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.148615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.148625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.148814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.149029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.149039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 
00:26:19.990 [2024-04-26 12:22:21.149259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.149573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.149583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.149908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.150089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.150098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.150290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.150535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.150544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.150732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.151037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.151048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.151263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.151608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.151618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.151820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.152134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.152145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.152455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.152782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.152791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 
00:26:19.990 [2024-04-26 12:22:21.153172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.153491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.153501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.153549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.153733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.153742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.154066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.154253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.154262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.154580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.154849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.990 [2024-04-26 12:22:21.154859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.990 qpair failed and we were unable to recover it. 00:26:19.990 [2024-04-26 12:22:21.155185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.155349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.155359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.155564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.155776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.155786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.155982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.156343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.156353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 
00:26:19.991 [2024-04-26 12:22:21.156673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.157021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.157031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.157360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.157649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.157658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.157852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.158174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.158183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.158505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.158828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.158841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.159007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.159303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.159313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.159515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.159749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.159759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.159953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.160284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.160294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 
00:26:19.991 [2024-04-26 12:22:21.160630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.160859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.160870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.161175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.161222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.161231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.161587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.161885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.161895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.162103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.162392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.162402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.162579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.162917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.162926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.163097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.163321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.163331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.163449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.163550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.163559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 
00:26:19.991 [2024-04-26 12:22:21.163881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.164065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.164074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.164419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.164616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.164625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.164820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.165149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.165159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.165370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.165686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.165699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.165901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.166182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.166191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.166354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.166657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.166666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.166884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.167238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.167247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 
00:26:19.991 [2024-04-26 12:22:21.167536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.167857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.167867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.168099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.168390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.168400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.168707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.168888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.168898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.169198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.169384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.169395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.991 [2024-04-26 12:22:21.169677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.169857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.991 [2024-04-26 12:22:21.169867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.991 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.170148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.170444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.170453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.170794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.170985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.170995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 
00:26:19.992 [2024-04-26 12:22:21.171196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.171397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.171406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.171664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.171956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.171965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.172317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.172516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.172525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.172848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.173189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.173199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.173435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.173713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.173723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.173932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.174243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.174253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.174643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.174964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.174974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 
00:26:19.992 [2024-04-26 12:22:21.175381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.175555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.175564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.175824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.176162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.176172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.176342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.176405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.176414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.176607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.176685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.176694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.176984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.177326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.177335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.177667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.177992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.178002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.178424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.178734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.178744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 
00:26:19.992 [2024-04-26 12:22:21.179124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.179224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.179233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.179561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.179819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.179829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.180175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.180535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.180545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.180857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.181102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.181112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.181160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.181529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.181539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.181705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.182033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.182043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.182232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.182425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.182435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 
00:26:19.992 [2024-04-26 12:22:21.182598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.182766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.182776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.992 [2024-04-26 12:22:21.182967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.183302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.992 [2024-04-26 12:22:21.183312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.992 qpair failed and we were unable to recover it. 00:26:19.993 [2024-04-26 12:22:21.183635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.993 [2024-04-26 12:22:21.183829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.993 [2024-04-26 12:22:21.183847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.993 qpair failed and we were unable to recover it. 00:26:19.993 [2024-04-26 12:22:21.184035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.993 [2024-04-26 12:22:21.184265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.993 [2024-04-26 12:22:21.184275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.993 qpair failed and we were unable to recover it. 00:26:19.993 [2024-04-26 12:22:21.184605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.993 [2024-04-26 12:22:21.184933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.993 [2024-04-26 12:22:21.184943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:19.993 qpair failed and we were unable to recover it. 00:26:19.993 [2024-04-26 12:22:21.185176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.185407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.185418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.257 qpair failed and we were unable to recover it. 00:26:20.257 [2024-04-26 12:22:21.185697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.186022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.186032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.257 qpair failed and we were unable to recover it. 
00:26:20.257 [2024-04-26 12:22:21.186349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.186563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.186573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.257 qpair failed and we were unable to recover it. 00:26:20.257 [2024-04-26 12:22:21.186947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.187261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.187270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.257 qpair failed and we were unable to recover it. 00:26:20.257 [2024-04-26 12:22:21.187592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.187640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.187649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.257 qpair failed and we were unable to recover it. 00:26:20.257 [2024-04-26 12:22:21.187975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.188292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.188302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.257 qpair failed and we were unable to recover it. 00:26:20.257 [2024-04-26 12:22:21.188482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.188767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.188777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.257 qpair failed and we were unable to recover it. 00:26:20.257 [2024-04-26 12:22:21.189111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.189317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.189326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.257 qpair failed and we were unable to recover it. 00:26:20.257 [2024-04-26 12:22:21.189668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.189983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.189993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.257 qpair failed and we were unable to recover it. 
00:26:20.257 [2024-04-26 12:22:21.190160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.190431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.190448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.257 qpair failed and we were unable to recover it. 00:26:20.257 [2024-04-26 12:22:21.190828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.191189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.191198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.257 qpair failed and we were unable to recover it. 00:26:20.257 [2024-04-26 12:22:21.191408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.191447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.191456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.257 qpair failed and we were unable to recover it. 00:26:20.257 [2024-04-26 12:22:21.191637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.191934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.191944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.257 qpair failed and we were unable to recover it. 00:26:20.257 [2024-04-26 12:22:21.191992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.192316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.192326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.257 qpair failed and we were unable to recover it. 00:26:20.257 [2024-04-26 12:22:21.192511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.257 [2024-04-26 12:22:21.192795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.192807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.193112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.193542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.193550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 
00:26:20.258 [2024-04-26 12:22:21.193599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.193917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.193928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.194108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.194291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.194299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.194487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.194569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.194577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.194897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.195192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.195201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.195570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.195892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.195902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.196207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.196519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.196528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.196742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.196956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.196965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 
00:26:20.258 [2024-04-26 12:22:21.197287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.197604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.197613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.197925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.198265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.198274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.198493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.198727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.198735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.199038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.199341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.199350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.199575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.199625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.199634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.199880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.200176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.200185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.200526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.200806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.200816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 
00:26:20.258 [2024-04-26 12:22:21.201153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.201192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.201201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.201488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.201556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.201564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.201877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.202185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.202194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.202513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.202855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.202865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.202953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.203256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.203265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.203430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.203680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.203689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.204040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.204229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.204238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 
00:26:20.258 [2024-04-26 12:22:21.204565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.204791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.258 [2024-04-26 12:22:21.204800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.258 qpair failed and we were unable to recover it. 00:26:20.258 [2024-04-26 12:22:21.204983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.205385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.205394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.205694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.205981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.205991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.206178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.206358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.206367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.206638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.206869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.206878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.207062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.207354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.207363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.207662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.207845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.207854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 
00:26:20.259 [2024-04-26 12:22:21.208146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.208335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.208345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.208720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.208913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.208923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.209149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.209461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.209470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.209774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.210035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.210044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.210404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.210691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.210700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.211048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.211229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.211238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.211558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.211846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.211855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 
00:26:20.259 [2024-04-26 12:22:21.212236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.212420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.212429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.212528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.212709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.212719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.213111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.213284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.213293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.213447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.213758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.213767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.213958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.214138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.214147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.214345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.214532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.214541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.214826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.215159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.215169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 
00:26:20.259 [2024-04-26 12:22:21.215492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.215707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.215716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.215958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.216170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.216180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.216375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.216556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.216565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.216801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.217120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.217130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.259 qpair failed and we were unable to recover it. 00:26:20.259 [2024-04-26 12:22:21.217490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.259 [2024-04-26 12:22:21.217889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.217899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.218254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.218573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.218583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.218890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.218935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.218943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 
00:26:20.260 [2024-04-26 12:22:21.219255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.219431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.219442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.219816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.220129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.220138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.220455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.220758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.220767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.221138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.221503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.221512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.221891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.222209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.222217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.222522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.222851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.222860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.222938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.223172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.223181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 
00:26:20.260 [2024-04-26 12:22:21.223375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.223574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.223583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.223912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.224306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.224315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.224627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.224939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.224948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.225268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.225574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.225583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.225915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.226042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.226051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.226103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.226319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.226328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.226663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.226952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.226962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 
00:26:20.260 [2024-04-26 12:22:21.227136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.227521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.227531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.227719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.228015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.228024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 12:22:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:20.260 [2024-04-26 12:22:21.228438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 12:22:21 -- common/autotest_common.sh@850 -- # return 0 00:26:20.260 [2024-04-26 12:22:21.228619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.228628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.228796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 12:22:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:20.260 [2024-04-26 12:22:21.229104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.229114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 12:22:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:20.260 [2024-04-26 12:22:21.229289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:20.260 [2024-04-26 12:22:21.229595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.229605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.229788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.230116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.230126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.230342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.230553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.230564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 
00:26:20.260 [2024-04-26 12:22:21.230757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.231051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.231061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.260 qpair failed and we were unable to recover it. 00:26:20.260 [2024-04-26 12:22:21.231260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.260 [2024-04-26 12:22:21.231671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.231680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.231808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.232119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.232129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.232552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.232914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.232924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.233233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.233539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.233548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.233853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.234150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.234161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.234511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.234699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.234708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 
00:26:20.261 [2024-04-26 12:22:21.234756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.234940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.234950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.235135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.235379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.235388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.235669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.235987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.235998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.236317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.236590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.236599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.236915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.237100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.237109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.237304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.237541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.237550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.237869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.238178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.238187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 
00:26:20.261 [2024-04-26 12:22:21.238360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.238555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.238564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.238887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.239185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.239195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.239583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.239734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.239743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.240128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.240326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.240335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.240678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.241038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.241048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.241245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.241530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.241541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.241738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.241959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.241970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 
00:26:20.261 [2024-04-26 12:22:21.242277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.242599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.242607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.242944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.243288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.243297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.243612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.243903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.243913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.244256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.244568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.244578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.244875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.245211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.245220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.245521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.245720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.245730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.261 qpair failed and we were unable to recover it. 00:26:20.261 [2024-04-26 12:22:21.246041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.246346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.261 [2024-04-26 12:22:21.246356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 
00:26:20.262 [2024-04-26 12:22:21.246672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.246997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.247007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.247336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.247625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.247634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.248003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.248172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.248181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.248371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.248692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.248702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.249031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.249343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.249352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.249674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.249995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.250004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.250238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.250547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.250556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 
00:26:20.262 [2024-04-26 12:22:21.250748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.250955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.250965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.251286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.251620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.251629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.251817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.252135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.252145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.252304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.252350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.252358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.252636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.252957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.252967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.253166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.253425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.253433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.253748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.254058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.254067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 
00:26:20.262 [2024-04-26 12:22:21.254298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.254655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.254664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.254898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.255262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.255273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.255456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.255686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.255695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.262 qpair failed and we were unable to recover it. 00:26:20.262 [2024-04-26 12:22:21.256008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.262 [2024-04-26 12:22:21.256215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.256225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.256559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.256727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.256736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.257015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.257194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.257203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.257599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.257929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.257939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 
00:26:20.263 [2024-04-26 12:22:21.258244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.258405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.258413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.258613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.258953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.258962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.259176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.259472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.259482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.259823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.260112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.260122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.260315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.260611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.260620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.260968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.261262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.261271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.261612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.261925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.261936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 
00:26:20.263 [2024-04-26 12:22:21.262277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.262596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.262606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.262915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.263102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.263113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.263359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.263542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.263552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.263902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.264204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.264213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.264397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.264786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.264796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.265096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.265447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.265456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.265635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.265930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.265940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 
00:26:20.263 [2024-04-26 12:22:21.266257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.266547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.266556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.266879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.267189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.267199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 [2024-04-26 12:22:21.267517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 12:22:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.263 [2024-04-26 12:22:21.267853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.267865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 12:22:21 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:20.263 [2024-04-26 12:22:21.268097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.268385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 [2024-04-26 12:22:21.268396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.263 qpair failed and we were unable to recover it. 00:26:20.263 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.263 [2024-04-26 12:22:21.268711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.263 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:20.263 [2024-04-26 12:22:21.268934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.268944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.269276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.269454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.269463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.269793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.270106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.270115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 
00:26:20.264 [2024-04-26 12:22:21.270470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.270784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.270793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.271116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.271391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.271401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.271616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.271846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.271856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.272134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.272451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.272460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.272658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.273001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.273010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.273360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.273551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.273560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.273885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.274210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.274218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 
00:26:20.264 [2024-04-26 12:22:21.274536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.274900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.274909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.275206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.275526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.275535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.275852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.276203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.276213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.276528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.276748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.276756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.277076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.277399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.277408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.277718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.277908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.277918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.278104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.278417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.278426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 
00:26:20.264 [2024-04-26 12:22:21.278757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.279065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.279075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.279264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.279555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.279565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.279903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.280195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.280204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.280500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.280670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.280679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.280999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.281279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.281289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.281691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.281960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.281970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 00:26:20.264 [2024-04-26 12:22:21.282273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.282584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.264 [2024-04-26 12:22:21.282594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.264 qpair failed and we were unable to recover it. 
00:26:20.264 [2024-04-26 12:22:21.282912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.283247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.283258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.283494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.283692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.283702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.284065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.284105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.284114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.284398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.284728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.284737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.285040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.285374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.285384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.285788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.286049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.286058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.286383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.286586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.286595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 
00:26:20.265 [2024-04-26 12:22:21.286791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.287017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.287027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 Malloc0 00:26:20.265 [2024-04-26 12:22:21.287317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.287600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.287609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.265 [2024-04-26 12:22:21.287924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 12:22:21 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:20.265 [2024-04-26 12:22:21.288232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.288241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.265 [2024-04-26 12:22:21.288459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:20.265 [2024-04-26 12:22:21.288753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.288762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.288973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.289209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.289218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.289541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.289859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.289868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.290196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.290549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.290559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 
00:26:20.265 [2024-04-26 12:22:21.290789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.291066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.291076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.291429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.291733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.291743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.291933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.292290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.292300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.292627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.292925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.292934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.293124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.293439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.293448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.293786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.293973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.293982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.294150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.294429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.294437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 
00:26:20.265 [2024-04-26 12:22:21.294468] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.265 [2024-04-26 12:22:21.294775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.295058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.295068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.295362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.295674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.295683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.265 qpair failed and we were unable to recover it. 00:26:20.265 [2024-04-26 12:22:21.296075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.296383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.265 [2024-04-26 12:22:21.296393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.296732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.296926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.296936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.297304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.297615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.297624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.297816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.297868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.297877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.298191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.298514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.298523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 
00:26:20.266 [2024-04-26 12:22:21.298729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.299048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.299057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.299254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.299539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.299548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.299877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.300224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.300233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.300353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.300683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.300693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.300996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.301180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.301189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.301537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.301761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.301770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.302095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.302264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.302273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 
00:26:20.266 [2024-04-26 12:22:21.302701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.303035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.303044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.303375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.266 [2024-04-26 12:22:21.303682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.303691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 12:22:21 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:20.266 [2024-04-26 12:22:21.304061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.266 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:20.266 [2024-04-26 12:22:21.304369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.304378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.304630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.305002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.305011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.305129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.305408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.305417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.305474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.305709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.305718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.306005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.306354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.306363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 
00:26:20.266 [2024-04-26 12:22:21.306681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.306984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.306993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.307344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.307513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.307522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.307845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.308063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.308072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.308336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.308674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.308683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.308865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.309146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.309154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.266 [2024-04-26 12:22:21.309338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.309665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.266 [2024-04-26 12:22:21.309674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.266 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.309973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.310317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.310326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 
00:26:20.267 [2024-04-26 12:22:21.310519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.310835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.310850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.311179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.311493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.311502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.311851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.312149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.312158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.312303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.312592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.312601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.312921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.313215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.313224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.313561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.313781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.313790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.314031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.314322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.314331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 
00:26:20.267 [2024-04-26 12:22:21.314634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.314964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.314973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.315288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.267 [2024-04-26 12:22:21.315618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.315628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.315821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 12:22:21 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:20.267 [2024-04-26 12:22:21.316016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.316027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.267 [2024-04-26 12:22:21.316239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:20.267 [2024-04-26 12:22:21.316589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.316598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.316785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.317089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.317099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.317397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.317736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.317745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.318003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.318348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.318358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 
00:26:20.267 [2024-04-26 12:22:21.318551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.318736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.318746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.319081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.319355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.319366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.319684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.319989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.319998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.320323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.320663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.320672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.320848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.321068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.321077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.321398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.321570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.321579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.321898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.322207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.322216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 
00:26:20.267 [2024-04-26 12:22:21.322539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.322869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.322879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.267 qpair failed and we were unable to recover it. 00:26:20.267 [2024-04-26 12:22:21.323190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.267 [2024-04-26 12:22:21.323476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.323484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 [2024-04-26 12:22:21.323807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.324188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.324197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 [2024-04-26 12:22:21.324592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.324926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.324936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 [2024-04-26 12:22:21.325345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.325568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.325578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 [2024-04-26 12:22:21.325937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.326259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.326268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 [2024-04-26 12:22:21.326601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.326822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.326832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 
00:26:20.268 [2024-04-26 12:22:21.327061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.327251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.327261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.268 [2024-04-26 12:22:21.327525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.327827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.327841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 12:22:21 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.268 [2024-04-26 12:22:21.328152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.268 [2024-04-26 12:22:21.328469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.328478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 [2024-04-26 12:22:21.328878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.329261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.329270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 [2024-04-26 12:22:21.329611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.329933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.329942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 [2024-04-26 12:22:21.330116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.330480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.330489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 [2024-04-26 12:22:21.330831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.331162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.331171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 
00:26:20.268 [2024-04-26 12:22:21.331472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.331800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.331809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 [2024-04-26 12:22:21.332155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.332471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.332481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 [2024-04-26 12:22:21.332675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.332975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.332991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 [2024-04-26 12:22:21.333292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.333453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.333464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.268 qpair failed and we were unable to recover it. 00:26:20.268 [2024-04-26 12:22:21.333866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.334135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.268 [2024-04-26 12:22:21.334144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178c650 with addr=10.0.0.2, port=4420 00:26:20.269 qpair failed and we were unable to recover it. 00:26:20.269 [2024-04-26 12:22:21.334465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.269 [2024-04-26 12:22:21.334767] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.269 [2024-04-26 12:22:21.337066] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:26:20.269 [2024-04-26 12:22:21.337111] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178c650 (107): Transport endpoint is not connected 00:26:20.269 [2024-04-26 12:22:21.337154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.269 qpair failed and we were unable to recover it. 
00:26:20.269 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.269 12:22:21 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:20.269 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.269 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:20.269 [2024-04-26 12:22:21.345455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.269 [2024-04-26 12:22:21.345529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.269 [2024-04-26 12:22:21.345547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.269 [2024-04-26 12:22:21.345555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.269 [2024-04-26 12:22:21.345561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.269 [2024-04-26 12:22:21.345577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.269 qpair failed and we were unable to recover it. 00:26:20.269 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.269 12:22:21 -- host/target_disconnect.sh@58 -- # wait 3567953 00:26:20.269 [2024-04-26 12:22:21.355278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.269 [2024-04-26 12:22:21.355339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.269 [2024-04-26 12:22:21.355354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.269 [2024-04-26 12:22:21.355361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.269 [2024-04-26 12:22:21.355368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.269 [2024-04-26 12:22:21.355382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.269 qpair failed and we were unable to recover it. 00:26:20.269 [2024-04-26 12:22:21.365286] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.269 [2024-04-26 12:22:21.365355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.269 [2024-04-26 12:22:21.365372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.269 [2024-04-26 12:22:21.365379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.269 [2024-04-26 12:22:21.365389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.269 [2024-04-26 12:22:21.365403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.269 qpair failed and we were unable to recover it. 
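From this point on the failure mode is different: the TCP connection is established, but the target rejects the NVMe-oF Fabrics CONNECT for the I/O qpair ("Unknown controller ID 0x1" on the target side), and the host sees the completion reported by nvme_fabric.c with sct 1, sc 130. Read as NVMe status fields, that is Status Code Type 0x1 (Command Specific) and Status Code 0x82, which the NVMe-oF Connect status table lists as "Connect Invalid Parameters" (stated here as an assumption to verify against the spec revision in use); it is consistent with the I/O queue CONNECT naming a controller ID the target no longer recognizes after the forced disconnect. The sketch below is a hypothetical status decoder, not SPDK's own code; the bit layout is assumed from the NVMe base specification's completion status field.

/* Hypothetical decoder (not SPDK code): split the 15-bit NVMe completion
 * status field into its parts and name the combination reported in the log
 * (sct 1, sc 130 == 0x82).  Field layout assumed from the NVMe base spec:
 * SC[7:0], SCT[10:8], CRD[12:11], More[13], DNR[14]. */
#include <stdio.h>

static void decode_status(unsigned status)
{
    unsigned sc   = status & 0xff;
    unsigned sct  = (status >> 8) & 0x7;
    unsigned more = (status >> 13) & 0x1;
    unsigned dnr  = (status >> 14) & 0x1;

    printf("sct=0x%x sc=0x%x more=%u dnr=%u\n", sct, sc, more, dnr);
    if (sct == 0x1 && sc == 0x82) {
        /* Command-specific status for a Fabrics Connect command; 0x82 is
         * listed as "Connect Invalid Parameters" (assumption: NVMe-oF
         * Connect command status table). */
        printf("Fabrics CONNECT rejected: Connect Invalid Parameters\n");
    }
}

int main(void)
{
    /* Rebuild the status word from the sct/sc pair printed by nvme_fabric.c. */
    decode_status((1u << 8) | 130u);
    return 0;
}

The blocks that follow are this same sequence repeating while the test keeps probing: _nvmf_ctrlr_add_io_qpair rejected on the target, the CONNECT poll failure on the host, and the qpair torn down with CQ transport error -6 (-ENXIO, "No such device or address").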
00:26:20.269 [2024-04-26 12:22:21.375312] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.269 [2024-04-26 12:22:21.375380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.269 [2024-04-26 12:22:21.375395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.269 [2024-04-26 12:22:21.375402] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.269 [2024-04-26 12:22:21.375408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.269 [2024-04-26 12:22:21.375421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.269 qpair failed and we were unable to recover it. 00:26:20.269 [2024-04-26 12:22:21.385323] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.269 [2024-04-26 12:22:21.385414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.269 [2024-04-26 12:22:21.385430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.269 [2024-04-26 12:22:21.385436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.269 [2024-04-26 12:22:21.385443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.269 [2024-04-26 12:22:21.385456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.269 qpair failed and we were unable to recover it. 00:26:20.269 [2024-04-26 12:22:21.395334] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.269 [2024-04-26 12:22:21.395388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.269 [2024-04-26 12:22:21.395402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.269 [2024-04-26 12:22:21.395409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.269 [2024-04-26 12:22:21.395415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.269 [2024-04-26 12:22:21.395428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.269 qpair failed and we were unable to recover it. 
00:26:20.269 [2024-04-26 12:22:21.405234] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.269 [2024-04-26 12:22:21.405292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.269 [2024-04-26 12:22:21.405308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.269 [2024-04-26 12:22:21.405314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.269 [2024-04-26 12:22:21.405321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.269 [2024-04-26 12:22:21.405337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.269 qpair failed and we were unable to recover it. 00:26:20.269 [2024-04-26 12:22:21.415293] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.269 [2024-04-26 12:22:21.415363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.269 [2024-04-26 12:22:21.415377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.269 [2024-04-26 12:22:21.415384] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.269 [2024-04-26 12:22:21.415390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.269 [2024-04-26 12:22:21.415403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.269 qpair failed and we were unable to recover it. 00:26:20.269 [2024-04-26 12:22:21.425397] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.269 [2024-04-26 12:22:21.425451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.269 [2024-04-26 12:22:21.425466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.269 [2024-04-26 12:22:21.425473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.269 [2024-04-26 12:22:21.425478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.269 [2024-04-26 12:22:21.425492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.269 qpair failed and we were unable to recover it. 
00:26:20.269 [2024-04-26 12:22:21.435432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.269 [2024-04-26 12:22:21.435482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.269 [2024-04-26 12:22:21.435497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.269 [2024-04-26 12:22:21.435503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.269 [2024-04-26 12:22:21.435509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.269 [2024-04-26 12:22:21.435522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.269 qpair failed and we were unable to recover it. 00:26:20.269 [2024-04-26 12:22:21.445461] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.270 [2024-04-26 12:22:21.445515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.270 [2024-04-26 12:22:21.445529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.270 [2024-04-26 12:22:21.445536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.270 [2024-04-26 12:22:21.445542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.270 [2024-04-26 12:22:21.445555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.270 qpair failed and we were unable to recover it. 00:26:20.270 [2024-04-26 12:22:21.455491] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.270 [2024-04-26 12:22:21.455580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.270 [2024-04-26 12:22:21.455594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.270 [2024-04-26 12:22:21.455601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.270 [2024-04-26 12:22:21.455611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.270 [2024-04-26 12:22:21.455624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.270 qpair failed and we were unable to recover it. 
00:26:20.270 [2024-04-26 12:22:21.465563] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.270 [2024-04-26 12:22:21.465626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.270 [2024-04-26 12:22:21.465651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.270 [2024-04-26 12:22:21.465659] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.270 [2024-04-26 12:22:21.465665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.270 [2024-04-26 12:22:21.465683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.270 qpair failed and we were unable to recover it. 00:26:20.547 [2024-04-26 12:22:21.475478] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.547 [2024-04-26 12:22:21.475570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.547 [2024-04-26 12:22:21.475595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.547 [2024-04-26 12:22:21.475603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.547 [2024-04-26 12:22:21.475610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.547 [2024-04-26 12:22:21.475627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-04-26 12:22:21.485562] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.547 [2024-04-26 12:22:21.485627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.547 [2024-04-26 12:22:21.485652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.547 [2024-04-26 12:22:21.485660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.547 [2024-04-26 12:22:21.485666] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.547 [2024-04-26 12:22:21.485684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.547 qpair failed and we were unable to recover it. 
00:26:20.547 [2024-04-26 12:22:21.495635] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.547 [2024-04-26 12:22:21.495732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.547 [2024-04-26 12:22:21.495748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.547 [2024-04-26 12:22:21.495755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.547 [2024-04-26 12:22:21.495761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.547 [2024-04-26 12:22:21.495775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-04-26 12:22:21.505716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.547 [2024-04-26 12:22:21.505771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.547 [2024-04-26 12:22:21.505787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.547 [2024-04-26 12:22:21.505793] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.547 [2024-04-26 12:22:21.505800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.547 [2024-04-26 12:22:21.505813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-04-26 12:22:21.515556] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.547 [2024-04-26 12:22:21.515606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.547 [2024-04-26 12:22:21.515620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.547 [2024-04-26 12:22:21.515627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.547 [2024-04-26 12:22:21.515633] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.547 [2024-04-26 12:22:21.515646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.547 qpair failed and we were unable to recover it. 
00:26:20.547 [2024-04-26 12:22:21.525703] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.547 [2024-04-26 12:22:21.525757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.547 [2024-04-26 12:22:21.525771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.547 [2024-04-26 12:22:21.525778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.547 [2024-04-26 12:22:21.525785] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.547 [2024-04-26 12:22:21.525797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-04-26 12:22:21.535765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.547 [2024-04-26 12:22:21.535830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.547 [2024-04-26 12:22:21.535848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.547 [2024-04-26 12:22:21.535855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.547 [2024-04-26 12:22:21.535861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.547 [2024-04-26 12:22:21.535874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-04-26 12:22:21.545759] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.547 [2024-04-26 12:22:21.545813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.547 [2024-04-26 12:22:21.545827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.547 [2024-04-26 12:22:21.545841] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.547 [2024-04-26 12:22:21.545848] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.547 [2024-04-26 12:22:21.545861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.547 qpair failed and we were unable to recover it. 
00:26:20.547 [2024-04-26 12:22:21.555805] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.547 [2024-04-26 12:22:21.555858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.547 [2024-04-26 12:22:21.555872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.547 [2024-04-26 12:22:21.555879] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.547 [2024-04-26 12:22:21.555885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.547 [2024-04-26 12:22:21.555897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-04-26 12:22:21.565788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.547 [2024-04-26 12:22:21.565877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.547 [2024-04-26 12:22:21.565891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.547 [2024-04-26 12:22:21.565898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.547 [2024-04-26 12:22:21.565904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.547 [2024-04-26 12:22:21.565917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-04-26 12:22:21.575913] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.547 [2024-04-26 12:22:21.575973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.547 [2024-04-26 12:22:21.575989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.547 [2024-04-26 12:22:21.575995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.547 [2024-04-26 12:22:21.576001] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.547 [2024-04-26 12:22:21.576015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.547 qpair failed and we were unable to recover it. 
00:26:20.547 [2024-04-26 12:22:21.585871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.547 [2024-04-26 12:22:21.585928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.547 [2024-04-26 12:22:21.585943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.547 [2024-04-26 12:22:21.585949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.547 [2024-04-26 12:22:21.585955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.547 [2024-04-26 12:22:21.585969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-04-26 12:22:21.595788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.547 [2024-04-26 12:22:21.595850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.547 [2024-04-26 12:22:21.595864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.595871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.595877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.595890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 00:26:20.548 [2024-04-26 12:22:21.605925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.605980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.605994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.606001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.606007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.606020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 
00:26:20.548 [2024-04-26 12:22:21.616077] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.616140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.616154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.616161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.616167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.616179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 00:26:20.548 [2024-04-26 12:22:21.626022] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.626078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.626093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.626099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.626106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.626118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 00:26:20.548 [2024-04-26 12:22:21.636091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.636146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.636159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.636170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.636176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.636188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 
00:26:20.548 [2024-04-26 12:22:21.646086] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.646143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.646157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.646164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.646170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.646182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 00:26:20.548 [2024-04-26 12:22:21.656086] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.656143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.656157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.656163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.656170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.656182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 00:26:20.548 [2024-04-26 12:22:21.666120] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.666178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.666192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.666199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.666205] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.666218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 
00:26:20.548 [2024-04-26 12:22:21.676138] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.676226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.676240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.676247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.676253] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.676265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 00:26:20.548 [2024-04-26 12:22:21.686161] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.686216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.686230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.686237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.686243] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.686256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 00:26:20.548 [2024-04-26 12:22:21.696193] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.696251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.696265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.696271] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.696277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.696290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 
00:26:20.548 [2024-04-26 12:22:21.706215] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.706286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.706300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.706307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.706313] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.706326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 00:26:20.548 [2024-04-26 12:22:21.716228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.716278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.716292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.716299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.716305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.716318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 00:26:20.548 [2024-04-26 12:22:21.726245] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.726308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.726326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.726333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.726339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.726351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 
00:26:20.548 [2024-04-26 12:22:21.736148] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.736213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.736228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.736234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.736241] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.736254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 00:26:20.548 [2024-04-26 12:22:21.746281] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.746334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.746348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.746355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.746361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.746374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 00:26:20.548 [2024-04-26 12:22:21.756373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.548 [2024-04-26 12:22:21.756425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.548 [2024-04-26 12:22:21.756439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.548 [2024-04-26 12:22:21.756445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.548 [2024-04-26 12:22:21.756452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.548 [2024-04-26 12:22:21.756464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.548 qpair failed and we were unable to recover it. 
00:26:20.810 [2024-04-26 12:22:21.766336] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.810 [2024-04-26 12:22:21.766404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.810 [2024-04-26 12:22:21.766418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.810 [2024-04-26 12:22:21.766425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.810 [2024-04-26 12:22:21.766431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.810 [2024-04-26 12:22:21.766444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.810 qpair failed and we were unable to recover it. 00:26:20.810 [2024-04-26 12:22:21.776364] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.810 [2024-04-26 12:22:21.776433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.810 [2024-04-26 12:22:21.776449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.810 [2024-04-26 12:22:21.776456] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.810 [2024-04-26 12:22:21.776462] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.810 [2024-04-26 12:22:21.776475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.810 qpair failed and we were unable to recover it. 00:26:20.810 [2024-04-26 12:22:21.786426] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.810 [2024-04-26 12:22:21.786481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.810 [2024-04-26 12:22:21.786496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.810 [2024-04-26 12:22:21.786503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.810 [2024-04-26 12:22:21.786509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.810 [2024-04-26 12:22:21.786521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.810 qpair failed and we were unable to recover it. 
00:26:20.810 [2024-04-26 12:22:21.796436] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.810 [2024-04-26 12:22:21.796491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.810 [2024-04-26 12:22:21.796505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.810 [2024-04-26 12:22:21.796512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.810 [2024-04-26 12:22:21.796518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.810 [2024-04-26 12:22:21.796530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.810 qpair failed and we were unable to recover it. 00:26:20.810 [2024-04-26 12:22:21.806451] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.810 [2024-04-26 12:22:21.806505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.810 [2024-04-26 12:22:21.806519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.810 [2024-04-26 12:22:21.806526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.810 [2024-04-26 12:22:21.806532] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.810 [2024-04-26 12:22:21.806545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.810 qpair failed and we were unable to recover it. 00:26:20.810 [2024-04-26 12:22:21.816529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.810 [2024-04-26 12:22:21.816598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.810 [2024-04-26 12:22:21.816628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.810 [2024-04-26 12:22:21.816636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.810 [2024-04-26 12:22:21.816643] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.810 [2024-04-26 12:22:21.816660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.810 qpair failed and we were unable to recover it. 
00:26:20.810 [2024-04-26 12:22:21.826539] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.810 [2024-04-26 12:22:21.826603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.810 [2024-04-26 12:22:21.826628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.810 [2024-04-26 12:22:21.826636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.810 [2024-04-26 12:22:21.826643] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.810 [2024-04-26 12:22:21.826660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.810 qpair failed and we were unable to recover it. 00:26:20.810 [2024-04-26 12:22:21.836555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.810 [2024-04-26 12:22:21.836617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.810 [2024-04-26 12:22:21.836642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.810 [2024-04-26 12:22:21.836650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.810 [2024-04-26 12:22:21.836657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.810 [2024-04-26 12:22:21.836674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.810 qpair failed and we were unable to recover it. 00:26:20.810 [2024-04-26 12:22:21.846589] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.810 [2024-04-26 12:22:21.846647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.810 [2024-04-26 12:22:21.846662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.810 [2024-04-26 12:22:21.846669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.810 [2024-04-26 12:22:21.846676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.810 [2024-04-26 12:22:21.846689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.810 qpair failed and we were unable to recover it. 
00:26:20.810 [2024-04-26 12:22:21.856608] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.810 [2024-04-26 12:22:21.856666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.810 [2024-04-26 12:22:21.856680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.810 [2024-04-26 12:22:21.856687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.810 [2024-04-26 12:22:21.856693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.810 [2024-04-26 12:22:21.856710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.810 qpair failed and we were unable to recover it. 00:26:20.810 [2024-04-26 12:22:21.866663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.810 [2024-04-26 12:22:21.866716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.810 [2024-04-26 12:22:21.866730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.810 [2024-04-26 12:22:21.866737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.810 [2024-04-26 12:22:21.866743] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.810 [2024-04-26 12:22:21.866756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.810 qpair failed and we were unable to recover it. 00:26:20.811 [2024-04-26 12:22:21.876670] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:21.876725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:21.876739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:21.876746] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:21.876752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:21.876765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 
00:26:20.811 [2024-04-26 12:22:21.886722] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:21.886780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:21.886795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:21.886802] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:21.886808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:21.886821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 00:26:20.811 [2024-04-26 12:22:21.896768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:21.896832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:21.896851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:21.896858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:21.896864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:21.896877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 00:26:20.811 [2024-04-26 12:22:21.906780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:21.906834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:21.906859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:21.906866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:21.906872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:21.906885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 
00:26:20.811 [2024-04-26 12:22:21.916665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:21.916725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:21.916741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:21.916747] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:21.916753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:21.916767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 00:26:20.811 [2024-04-26 12:22:21.926825] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:21.926886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:21.926901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:21.926909] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:21.926915] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:21.926928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 00:26:20.811 [2024-04-26 12:22:21.936858] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:21.936947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:21.936962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:21.936969] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:21.936975] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:21.936988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 
00:26:20.811 [2024-04-26 12:22:21.946891] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:21.946947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:21.946961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:21.946968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:21.946974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:21.946990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 00:26:20.811 [2024-04-26 12:22:21.956835] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:21.956893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:21.956907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:21.956914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:21.956920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:21.956933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 00:26:20.811 [2024-04-26 12:22:21.966832] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:21.966899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:21.966913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:21.966919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:21.966925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:21.966939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 
00:26:20.811 [2024-04-26 12:22:21.976978] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:21.977059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:21.977073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:21.977080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:21.977086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:21.977099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 00:26:20.811 [2024-04-26 12:22:21.987003] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:21.987059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:21.987074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:21.987081] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:21.987087] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:21.987101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 00:26:20.811 [2024-04-26 12:22:21.996945] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:21.996994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:21.997012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:21.997018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:21.997024] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:21.997037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 
00:26:20.811 [2024-04-26 12:22:22.007068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:22.007123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:22.007137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:22.007144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:22.007150] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:22.007163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 00:26:20.811 [2024-04-26 12:22:22.017137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:22.017213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:22.017227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:22.017234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:22.017240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:22.017253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 00:26:20.811 [2024-04-26 12:22:22.027116] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.811 [2024-04-26 12:22:22.027173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.811 [2024-04-26 12:22:22.027189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.811 [2024-04-26 12:22:22.027196] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.811 [2024-04-26 12:22:22.027202] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:20.811 [2024-04-26 12:22:22.027219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.811 qpair failed and we were unable to recover it. 
00:26:21.072 [2024-04-26 12:22:22.037030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.072 [2024-04-26 12:22:22.037090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.072 [2024-04-26 12:22:22.037104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.072 [2024-04-26 12:22:22.037111] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.072 [2024-04-26 12:22:22.037121] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.072 [2024-04-26 12:22:22.037134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.072 qpair failed and we were unable to recover it. 00:26:21.072 [2024-04-26 12:22:22.047045] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.072 [2024-04-26 12:22:22.047099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.072 [2024-04-26 12:22:22.047113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.072 [2024-04-26 12:22:22.047120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.072 [2024-04-26 12:22:22.047126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.072 [2024-04-26 12:22:22.047139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.072 qpair failed and we were unable to recover it. 00:26:21.072 [2024-04-26 12:22:22.057194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.072 [2024-04-26 12:22:22.057253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.072 [2024-04-26 12:22:22.057267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.072 [2024-04-26 12:22:22.057273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.072 [2024-04-26 12:22:22.057279] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.072 [2024-04-26 12:22:22.057292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.072 qpair failed and we were unable to recover it. 
00:26:21.072 [2024-04-26 12:22:22.067270] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.072 [2024-04-26 12:22:22.067329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.072 [2024-04-26 12:22:22.067343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.072 [2024-04-26 12:22:22.067350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.072 [2024-04-26 12:22:22.067356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.072 [2024-04-26 12:22:22.067368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.072 qpair failed and we were unable to recover it. 00:26:21.072 [2024-04-26 12:22:22.077233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.072 [2024-04-26 12:22:22.077278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.072 [2024-04-26 12:22:22.077292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.072 [2024-04-26 12:22:22.077299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.072 [2024-04-26 12:22:22.077305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.072 [2024-04-26 12:22:22.077317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.072 qpair failed and we were unable to recover it. 00:26:21.072 [2024-04-26 12:22:22.087327] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.072 [2024-04-26 12:22:22.087414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.072 [2024-04-26 12:22:22.087429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.072 [2024-04-26 12:22:22.087435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.072 [2024-04-26 12:22:22.087441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.072 [2024-04-26 12:22:22.087454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.072 qpair failed and we were unable to recover it. 
00:26:21.072 [2024-04-26 12:22:22.097314] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.072 [2024-04-26 12:22:22.097374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.072 [2024-04-26 12:22:22.097388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.072 [2024-04-26 12:22:22.097395] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.072 [2024-04-26 12:22:22.097401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.072 [2024-04-26 12:22:22.097414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.072 qpair failed and we were unable to recover it. 00:26:21.072 [2024-04-26 12:22:22.107368] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.072 [2024-04-26 12:22:22.107446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.072 [2024-04-26 12:22:22.107460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.072 [2024-04-26 12:22:22.107467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.072 [2024-04-26 12:22:22.107473] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.073 [2024-04-26 12:22:22.107486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.073 qpair failed and we were unable to recover it. 00:26:21.073 [2024-04-26 12:22:22.117376] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.073 [2024-04-26 12:22:22.117453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.073 [2024-04-26 12:22:22.117467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.073 [2024-04-26 12:22:22.117474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.073 [2024-04-26 12:22:22.117480] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.073 [2024-04-26 12:22:22.117492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.073 qpair failed and we were unable to recover it. 
00:26:21.073 [2024-04-26 12:22:22.127271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.073 [2024-04-26 12:22:22.127327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.073 [2024-04-26 12:22:22.127341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.073 [2024-04-26 12:22:22.127348] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.073 [2024-04-26 12:22:22.127358] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.073 [2024-04-26 12:22:22.127372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.073 qpair failed and we were unable to recover it. 00:26:21.073 [2024-04-26 12:22:22.137433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.073 [2024-04-26 12:22:22.137497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.073 [2024-04-26 12:22:22.137510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.073 [2024-04-26 12:22:22.137517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.073 [2024-04-26 12:22:22.137523] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.073 [2024-04-26 12:22:22.137536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.073 qpair failed and we were unable to recover it. 00:26:21.073 [2024-04-26 12:22:22.147453] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.073 [2024-04-26 12:22:22.147501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.073 [2024-04-26 12:22:22.147514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.073 [2024-04-26 12:22:22.147521] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.073 [2024-04-26 12:22:22.147527] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.073 [2024-04-26 12:22:22.147539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.073 qpair failed and we were unable to recover it. 
00:26:21.073 [2024-04-26 12:22:22.157363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.073 [2024-04-26 12:22:22.157430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.073 [2024-04-26 12:22:22.157444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.073 [2024-04-26 12:22:22.157451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.073 [2024-04-26 12:22:22.157457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.073 [2024-04-26 12:22:22.157470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.073 qpair failed and we were unable to recover it. 00:26:21.073 [2024-04-26 12:22:22.167511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.073 [2024-04-26 12:22:22.167565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.073 [2024-04-26 12:22:22.167578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.073 [2024-04-26 12:22:22.167585] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.073 [2024-04-26 12:22:22.167591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.073 [2024-04-26 12:22:22.167603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.073 qpair failed and we were unable to recover it. 00:26:21.073 [2024-04-26 12:22:22.177451] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.073 [2024-04-26 12:22:22.177544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.073 [2024-04-26 12:22:22.177558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.073 [2024-04-26 12:22:22.177565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.073 [2024-04-26 12:22:22.177571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.073 [2024-04-26 12:22:22.177583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.073 qpair failed and we were unable to recover it. 
00:26:21.073 [2024-04-26 12:22:22.187569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.073 [2024-04-26 12:22:22.187630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.073 [2024-04-26 12:22:22.187644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.073 [2024-04-26 12:22:22.187651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.073 [2024-04-26 12:22:22.187657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.073 [2024-04-26 12:22:22.187669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.073 qpair failed and we were unable to recover it. 00:26:21.073 [2024-04-26 12:22:22.197599] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.073 [2024-04-26 12:22:22.197651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.073 [2024-04-26 12:22:22.197666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.073 [2024-04-26 12:22:22.197673] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.073 [2024-04-26 12:22:22.197679] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.073 [2024-04-26 12:22:22.197692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.073 qpair failed and we were unable to recover it. 00:26:21.073 [2024-04-26 12:22:22.207633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.073 [2024-04-26 12:22:22.207689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.073 [2024-04-26 12:22:22.207703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.073 [2024-04-26 12:22:22.207709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.073 [2024-04-26 12:22:22.207715] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.073 [2024-04-26 12:22:22.207728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.073 qpair failed and we were unable to recover it. 
00:26:21.073 [2024-04-26 12:22:22.217582] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.073 [2024-04-26 12:22:22.217655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.073 [2024-04-26 12:22:22.217670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.073 [2024-04-26 12:22:22.217676] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.073 [2024-04-26 12:22:22.217686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.073 [2024-04-26 12:22:22.217698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.073 qpair failed and we were unable to recover it. 00:26:21.073 [2024-04-26 12:22:22.227715] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.073 [2024-04-26 12:22:22.227798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.073 [2024-04-26 12:22:22.227813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.073 [2024-04-26 12:22:22.227820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.073 [2024-04-26 12:22:22.227826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.073 [2024-04-26 12:22:22.227842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.073 qpair failed and we were unable to recover it. 00:26:21.073 [2024-04-26 12:22:22.237578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.073 [2024-04-26 12:22:22.237634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.073 [2024-04-26 12:22:22.237648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.073 [2024-04-26 12:22:22.237655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.073 [2024-04-26 12:22:22.237661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.073 [2024-04-26 12:22:22.237673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.073 qpair failed and we were unable to recover it. 
00:26:21.073 [2024-04-26 12:22:22.247673] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.073 [2024-04-26 12:22:22.247729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.074 [2024-04-26 12:22:22.247743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.074 [2024-04-26 12:22:22.247749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.074 [2024-04-26 12:22:22.247755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.074 [2024-04-26 12:22:22.247768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.074 qpair failed and we were unable to recover it. 00:26:21.074 [2024-04-26 12:22:22.257739] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.074 [2024-04-26 12:22:22.257801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.074 [2024-04-26 12:22:22.257815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.074 [2024-04-26 12:22:22.257822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.074 [2024-04-26 12:22:22.257828] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.074 [2024-04-26 12:22:22.257845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.074 qpair failed and we were unable to recover it. 00:26:21.074 [2024-04-26 12:22:22.267768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.074 [2024-04-26 12:22:22.267822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.074 [2024-04-26 12:22:22.267836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.074 [2024-04-26 12:22:22.267848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.074 [2024-04-26 12:22:22.267854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.074 [2024-04-26 12:22:22.267867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.074 qpair failed and we were unable to recover it. 
00:26:21.074 [2024-04-26 12:22:22.277795] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.074 [2024-04-26 12:22:22.277853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.074 [2024-04-26 12:22:22.277867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.074 [2024-04-26 12:22:22.277874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.074 [2024-04-26 12:22:22.277880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.074 [2024-04-26 12:22:22.277893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.074 qpair failed and we were unable to recover it. 00:26:21.074 [2024-04-26 12:22:22.287848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.074 [2024-04-26 12:22:22.287898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.074 [2024-04-26 12:22:22.287912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.074 [2024-04-26 12:22:22.287919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.074 [2024-04-26 12:22:22.287925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.074 [2024-04-26 12:22:22.287938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.074 qpair failed and we were unable to recover it. 00:26:21.336 [2024-04-26 12:22:22.297892] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.297956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.297970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.297977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.297983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.297996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 
00:26:21.336 [2024-04-26 12:22:22.307902] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.307963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.307976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.307988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.307994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.308007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 00:26:21.336 [2024-04-26 12:22:22.317934] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.317991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.318005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.318012] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.318017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.318030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 00:26:21.336 [2024-04-26 12:22:22.327945] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.328003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.328016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.328023] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.328029] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.328042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 
00:26:21.336 [2024-04-26 12:22:22.338001] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.338056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.338070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.338077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.338083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.338096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 00:26:21.336 [2024-04-26 12:22:22.348030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.348119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.348132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.348139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.348145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.348158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 00:26:21.336 [2024-04-26 12:22:22.358037] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.358130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.358144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.358151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.358157] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.358170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 
00:26:21.336 [2024-04-26 12:22:22.368077] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.368133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.368147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.368154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.368160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.368173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 00:26:21.336 [2024-04-26 12:22:22.378083] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.378137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.378152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.378158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.378164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.378177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 00:26:21.336 [2024-04-26 12:22:22.388156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.388204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.388219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.388226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.388232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.388247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 
00:26:21.336 [2024-04-26 12:22:22.398179] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.398232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.398246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.398258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.398264] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.398276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 00:26:21.336 [2024-04-26 12:22:22.408206] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.408256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.408271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.408278] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.408284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.408297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 00:26:21.336 [2024-04-26 12:22:22.418096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.418154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.418168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.418175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.418181] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.418194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 
00:26:21.336 [2024-04-26 12:22:22.428260] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.428312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.428327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.428334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.428340] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.428353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 00:26:21.336 [2024-04-26 12:22:22.438320] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.438371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.438386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.438392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.438399] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.336 [2024-04-26 12:22:22.438411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.336 qpair failed and we were unable to recover it. 00:26:21.336 [2024-04-26 12:22:22.448368] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.336 [2024-04-26 12:22:22.448425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.336 [2024-04-26 12:22:22.448440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.336 [2024-04-26 12:22:22.448447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.336 [2024-04-26 12:22:22.448453] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.337 [2024-04-26 12:22:22.448470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.337 qpair failed and we were unable to recover it. 
00:26:21.337 [2024-04-26 12:22:22.458349] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.337 [2024-04-26 12:22:22.458408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.337 [2024-04-26 12:22:22.458422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.337 [2024-04-26 12:22:22.458429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.337 [2024-04-26 12:22:22.458436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.337 [2024-04-26 12:22:22.458448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.337 qpair failed and we were unable to recover it. 00:26:21.337 [2024-04-26 12:22:22.468362] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.337 [2024-04-26 12:22:22.468411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.337 [2024-04-26 12:22:22.468425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.337 [2024-04-26 12:22:22.468431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.337 [2024-04-26 12:22:22.468438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.337 [2024-04-26 12:22:22.468450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.337 qpair failed and we were unable to recover it. 00:26:21.337 [2024-04-26 12:22:22.478271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.337 [2024-04-26 12:22:22.478328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.337 [2024-04-26 12:22:22.478343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.337 [2024-04-26 12:22:22.478350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.337 [2024-04-26 12:22:22.478356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.337 [2024-04-26 12:22:22.478370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.337 qpair failed and we were unable to recover it. 
00:26:21.337 [2024-04-26 12:22:22.488434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.337 [2024-04-26 12:22:22.488490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.337 [2024-04-26 12:22:22.488505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.337 [2024-04-26 12:22:22.488516] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.337 [2024-04-26 12:22:22.488522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.337 [2024-04-26 12:22:22.488535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.337 qpair failed and we were unable to recover it. 00:26:21.337 [2024-04-26 12:22:22.498457] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.337 [2024-04-26 12:22:22.498517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.337 [2024-04-26 12:22:22.498532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.337 [2024-04-26 12:22:22.498538] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.337 [2024-04-26 12:22:22.498545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.337 [2024-04-26 12:22:22.498558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.337 qpair failed and we were unable to recover it. 00:26:21.337 [2024-04-26 12:22:22.508495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.337 [2024-04-26 12:22:22.508546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.337 [2024-04-26 12:22:22.508560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.337 [2024-04-26 12:22:22.508567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.337 [2024-04-26 12:22:22.508572] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.337 [2024-04-26 12:22:22.508586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.337 qpair failed and we were unable to recover it. 
00:26:21.337 [2024-04-26 12:22:22.518387] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.337 [2024-04-26 12:22:22.518443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.337 [2024-04-26 12:22:22.518457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.337 [2024-04-26 12:22:22.518464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.337 [2024-04-26 12:22:22.518470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.337 [2024-04-26 12:22:22.518483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.337 qpair failed and we were unable to recover it. 00:26:21.337 [2024-04-26 12:22:22.528553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.337 [2024-04-26 12:22:22.528607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.337 [2024-04-26 12:22:22.528621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.337 [2024-04-26 12:22:22.528628] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.337 [2024-04-26 12:22:22.528634] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.337 [2024-04-26 12:22:22.528646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.337 qpair failed and we were unable to recover it. 00:26:21.337 [2024-04-26 12:22:22.538603] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.337 [2024-04-26 12:22:22.538676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.337 [2024-04-26 12:22:22.538701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.337 [2024-04-26 12:22:22.538709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.337 [2024-04-26 12:22:22.538716] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.337 [2024-04-26 12:22:22.538733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.337 qpair failed and we were unable to recover it. 
00:26:21.337 [2024-04-26 12:22:22.548604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.337 [2024-04-26 12:22:22.548656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.337 [2024-04-26 12:22:22.548672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.337 [2024-04-26 12:22:22.548680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.337 [2024-04-26 12:22:22.548687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.337 [2024-04-26 12:22:22.548700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.337 qpair failed and we were unable to recover it. 00:26:21.599 [2024-04-26 12:22:22.558625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.599 [2024-04-26 12:22:22.558682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.599 [2024-04-26 12:22:22.558697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.599 [2024-04-26 12:22:22.558703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.599 [2024-04-26 12:22:22.558709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.599 [2024-04-26 12:22:22.558722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.599 qpair failed and we were unable to recover it. 00:26:21.599 [2024-04-26 12:22:22.568653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.599 [2024-04-26 12:22:22.568707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.599 [2024-04-26 12:22:22.568721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.599 [2024-04-26 12:22:22.568728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.599 [2024-04-26 12:22:22.568734] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.599 [2024-04-26 12:22:22.568747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.599 qpair failed and we were unable to recover it. 
00:26:21.599 [2024-04-26 12:22:22.578604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.599 [2024-04-26 12:22:22.578662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.599 [2024-04-26 12:22:22.578680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.600 [2024-04-26 12:22:22.578687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.600 [2024-04-26 12:22:22.578693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.600 [2024-04-26 12:22:22.578706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.600 qpair failed and we were unable to recover it. 00:26:21.600 [2024-04-26 12:22:22.588712] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.600 [2024-04-26 12:22:22.588764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.600 [2024-04-26 12:22:22.588779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.600 [2024-04-26 12:22:22.588786] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.600 [2024-04-26 12:22:22.588792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.600 [2024-04-26 12:22:22.588805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.600 qpair failed and we were unable to recover it. 00:26:21.600 [2024-04-26 12:22:22.598758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.600 [2024-04-26 12:22:22.598820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.600 [2024-04-26 12:22:22.598834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.600 [2024-04-26 12:22:22.598846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.600 [2024-04-26 12:22:22.598852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.600 [2024-04-26 12:22:22.598865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.600 qpair failed and we were unable to recover it. 
00:26:21.600 [2024-04-26 12:22:22.608759] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.600 [2024-04-26 12:22:22.608813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.600 [2024-04-26 12:22:22.608828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.600 [2024-04-26 12:22:22.608834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.600 [2024-04-26 12:22:22.608846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.600 [2024-04-26 12:22:22.608860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.600 qpair failed and we were unable to recover it. 00:26:21.600 [2024-04-26 12:22:22.618672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.600 [2024-04-26 12:22:22.618733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.600 [2024-04-26 12:22:22.618747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.600 [2024-04-26 12:22:22.618753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.600 [2024-04-26 12:22:22.618759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.600 [2024-04-26 12:22:22.618776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.600 qpair failed and we were unable to recover it. 00:26:21.600 [2024-04-26 12:22:22.628828] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.600 [2024-04-26 12:22:22.628904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.600 [2024-04-26 12:22:22.628919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.600 [2024-04-26 12:22:22.628926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.600 [2024-04-26 12:22:22.628931] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.600 [2024-04-26 12:22:22.628945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.600 qpair failed and we were unable to recover it. 
00:26:21.600 [2024-04-26 12:22:22.638854] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.600 [2024-04-26 12:22:22.638908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.600 [2024-04-26 12:22:22.638923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.600 [2024-04-26 12:22:22.638929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.600 [2024-04-26 12:22:22.638935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.600 [2024-04-26 12:22:22.638949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.600 qpair failed and we were unable to recover it. 00:26:21.600 [2024-04-26 12:22:22.648909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.600 [2024-04-26 12:22:22.648975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.600 [2024-04-26 12:22:22.648990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.600 [2024-04-26 12:22:22.648996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.600 [2024-04-26 12:22:22.649002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.600 [2024-04-26 12:22:22.649016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.600 qpair failed and we were unable to recover it. 00:26:21.600 [2024-04-26 12:22:22.658921] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.600 [2024-04-26 12:22:22.659008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.600 [2024-04-26 12:22:22.659022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.600 [2024-04-26 12:22:22.659029] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.600 [2024-04-26 12:22:22.659035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.600 [2024-04-26 12:22:22.659048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.600 qpair failed and we were unable to recover it. 
00:26:21.600 [2024-04-26 12:22:22.668937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.600 [2024-04-26 12:22:22.668987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.600 [2024-04-26 12:22:22.669008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.600 [2024-04-26 12:22:22.669014] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.600 [2024-04-26 12:22:22.669020] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.600 [2024-04-26 12:22:22.669034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.600 qpair failed and we were unable to recover it. 00:26:21.600 [2024-04-26 12:22:22.678935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.600 [2024-04-26 12:22:22.678990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.600 [2024-04-26 12:22:22.679005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.600 [2024-04-26 12:22:22.679011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.600 [2024-04-26 12:22:22.679017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.600 [2024-04-26 12:22:22.679030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.600 qpair failed and we were unable to recover it. 00:26:21.600 [2024-04-26 12:22:22.688951] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.600 [2024-04-26 12:22:22.689012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.600 [2024-04-26 12:22:22.689026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.600 [2024-04-26 12:22:22.689033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.600 [2024-04-26 12:22:22.689039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.600 [2024-04-26 12:22:22.689052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.600 qpair failed and we were unable to recover it. 
00:26:21.600 [2024-04-26 12:22:22.699019] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.600 [2024-04-26 12:22:22.699076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.600 [2024-04-26 12:22:22.699091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.600 [2024-04-26 12:22:22.699097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.600 [2024-04-26 12:22:22.699103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.600 [2024-04-26 12:22:22.699116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.600 qpair failed and we were unable to recover it. 00:26:21.600 [2024-04-26 12:22:22.709045] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.600 [2024-04-26 12:22:22.709098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.600 [2024-04-26 12:22:22.709111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.600 [2024-04-26 12:22:22.709118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.600 [2024-04-26 12:22:22.709124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.601 [2024-04-26 12:22:22.709141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.601 qpair failed and we were unable to recover it. 00:26:21.601 [2024-04-26 12:22:22.719076] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.601 [2024-04-26 12:22:22.719129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.601 [2024-04-26 12:22:22.719143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.601 [2024-04-26 12:22:22.719150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.601 [2024-04-26 12:22:22.719156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.601 [2024-04-26 12:22:22.719168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.601 qpair failed and we were unable to recover it. 
00:26:21.601 [2024-04-26 12:22:22.729124] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.601 [2024-04-26 12:22:22.729178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.601 [2024-04-26 12:22:22.729191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.601 [2024-04-26 12:22:22.729198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.601 [2024-04-26 12:22:22.729204] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.601 [2024-04-26 12:22:22.729217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.601 qpair failed and we were unable to recover it. 00:26:21.601 [2024-04-26 12:22:22.739153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.601 [2024-04-26 12:22:22.739211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.601 [2024-04-26 12:22:22.739225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.601 [2024-04-26 12:22:22.739232] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.601 [2024-04-26 12:22:22.739238] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.601 [2024-04-26 12:22:22.739251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.601 qpair failed and we were unable to recover it. 00:26:21.601 [2024-04-26 12:22:22.749144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.601 [2024-04-26 12:22:22.749205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.601 [2024-04-26 12:22:22.749218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.601 [2024-04-26 12:22:22.749225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.601 [2024-04-26 12:22:22.749231] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.601 [2024-04-26 12:22:22.749244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.601 qpair failed and we were unable to recover it. 
00:26:21.601 [2024-04-26 12:22:22.759167] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.601 [2024-04-26 12:22:22.759232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.601 [2024-04-26 12:22:22.759249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.601 [2024-04-26 12:22:22.759256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.601 [2024-04-26 12:22:22.759262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.601 [2024-04-26 12:22:22.759275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.601 qpair failed and we were unable to recover it. 00:26:21.601 [2024-04-26 12:22:22.769219] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.601 [2024-04-26 12:22:22.769273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.601 [2024-04-26 12:22:22.769287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.601 [2024-04-26 12:22:22.769294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.601 [2024-04-26 12:22:22.769300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.601 [2024-04-26 12:22:22.769313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.601 qpair failed and we were unable to recover it. 00:26:21.601 [2024-04-26 12:22:22.779128] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.601 [2024-04-26 12:22:22.779189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.601 [2024-04-26 12:22:22.779203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.601 [2024-04-26 12:22:22.779210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.601 [2024-04-26 12:22:22.779216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.601 [2024-04-26 12:22:22.779228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.601 qpair failed and we were unable to recover it. 
00:26:21.601 [2024-04-26 12:22:22.789274] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.601 [2024-04-26 12:22:22.789334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.601 [2024-04-26 12:22:22.789349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.601 [2024-04-26 12:22:22.789356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.601 [2024-04-26 12:22:22.789362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.601 [2024-04-26 12:22:22.789375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.601 qpair failed and we were unable to recover it. 00:26:21.601 [2024-04-26 12:22:22.799310] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.601 [2024-04-26 12:22:22.799358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.601 [2024-04-26 12:22:22.799373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.601 [2024-04-26 12:22:22.799380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.601 [2024-04-26 12:22:22.799386] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.601 [2024-04-26 12:22:22.799402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.601 qpair failed and we were unable to recover it. 00:26:21.601 [2024-04-26 12:22:22.809329] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.601 [2024-04-26 12:22:22.809385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.601 [2024-04-26 12:22:22.809399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.601 [2024-04-26 12:22:22.809406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.601 [2024-04-26 12:22:22.809412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.601 [2024-04-26 12:22:22.809425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.601 qpair failed and we were unable to recover it. 
00:26:21.863 [2024-04-26 12:22:22.819359] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.863 [2024-04-26 12:22:22.819417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.863 [2024-04-26 12:22:22.819432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.863 [2024-04-26 12:22:22.819439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.864 [2024-04-26 12:22:22.819445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.864 [2024-04-26 12:22:22.819459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.864 qpair failed and we were unable to recover it. 00:26:21.864 [2024-04-26 12:22:22.829350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.864 [2024-04-26 12:22:22.829446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.864 [2024-04-26 12:22:22.829461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.864 [2024-04-26 12:22:22.829468] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.864 [2024-04-26 12:22:22.829474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.864 [2024-04-26 12:22:22.829487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.864 qpair failed and we were unable to recover it. 00:26:21.864 [2024-04-26 12:22:22.839414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.864 [2024-04-26 12:22:22.839469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.864 [2024-04-26 12:22:22.839483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.864 [2024-04-26 12:22:22.839490] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.864 [2024-04-26 12:22:22.839496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.864 [2024-04-26 12:22:22.839509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.864 qpair failed and we were unable to recover it. 
00:26:21.864 [2024-04-26 12:22:22.849452] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.864 [2024-04-26 12:22:22.849510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.864 [2024-04-26 12:22:22.849528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.864 [2024-04-26 12:22:22.849535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.864 [2024-04-26 12:22:22.849541] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.864 [2024-04-26 12:22:22.849553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.864 qpair failed and we were unable to recover it. 00:26:21.864 [2024-04-26 12:22:22.859517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.864 [2024-04-26 12:22:22.859579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.864 [2024-04-26 12:22:22.859595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.864 [2024-04-26 12:22:22.859602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.864 [2024-04-26 12:22:22.859608] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.864 [2024-04-26 12:22:22.859621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.864 qpair failed and we were unable to recover it. 00:26:21.864 [2024-04-26 12:22:22.869488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.864 [2024-04-26 12:22:22.869542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.864 [2024-04-26 12:22:22.869556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.864 [2024-04-26 12:22:22.869563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.864 [2024-04-26 12:22:22.869569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.864 [2024-04-26 12:22:22.869581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.864 qpair failed and we were unable to recover it. 
00:26:21.864 [2024-04-26 12:22:22.879391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.864 [2024-04-26 12:22:22.879449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.864 [2024-04-26 12:22:22.879464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.864 [2024-04-26 12:22:22.879471] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.864 [2024-04-26 12:22:22.879477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.864 [2024-04-26 12:22:22.879489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.864 qpair failed and we were unable to recover it. 00:26:21.864 [2024-04-26 12:22:22.889592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.864 [2024-04-26 12:22:22.889676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.864 [2024-04-26 12:22:22.889691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.864 [2024-04-26 12:22:22.889697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.864 [2024-04-26 12:22:22.889707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.864 [2024-04-26 12:22:22.889720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.864 qpair failed and we were unable to recover it. 00:26:21.864 [2024-04-26 12:22:22.899592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.864 [2024-04-26 12:22:22.899661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.864 [2024-04-26 12:22:22.899685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.864 [2024-04-26 12:22:22.899693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.864 [2024-04-26 12:22:22.899700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.864 [2024-04-26 12:22:22.899717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.864 qpair failed and we were unable to recover it. 
00:26:21.864 [2024-04-26 12:22:22.909599] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.864 [2024-04-26 12:22:22.909664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.864 [2024-04-26 12:22:22.909689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.864 [2024-04-26 12:22:22.909697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.864 [2024-04-26 12:22:22.909703] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.864 [2024-04-26 12:22:22.909721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.864 qpair failed and we were unable to recover it. 00:26:21.864 [2024-04-26 12:22:22.919611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.864 [2024-04-26 12:22:22.919680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.864 [2024-04-26 12:22:22.919696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.864 [2024-04-26 12:22:22.919703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.864 [2024-04-26 12:22:22.919709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.864 [2024-04-26 12:22:22.919723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.864 qpair failed and we were unable to recover it. 00:26:21.864 [2024-04-26 12:22:22.929642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.864 [2024-04-26 12:22:22.929694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.864 [2024-04-26 12:22:22.929708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.864 [2024-04-26 12:22:22.929715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.864 [2024-04-26 12:22:22.929722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.864 [2024-04-26 12:22:22.929734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.864 qpair failed and we were unable to recover it. 
00:26:21.864 [2024-04-26 12:22:22.939565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.864 [2024-04-26 12:22:22.939632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.864 [2024-04-26 12:22:22.939646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.864 [2024-04-26 12:22:22.939653] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.864 [2024-04-26 12:22:22.939659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.864 [2024-04-26 12:22:22.939672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.864 qpair failed and we were unable to recover it. 00:26:21.864 [2024-04-26 12:22:22.949605] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.864 [2024-04-26 12:22:22.949675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.864 [2024-04-26 12:22:22.949689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.864 [2024-04-26 12:22:22.949696] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.864 [2024-04-26 12:22:22.949702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.864 [2024-04-26 12:22:22.949715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.864 qpair failed and we were unable to recover it. 00:26:21.865 [2024-04-26 12:22:22.959655] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.865 [2024-04-26 12:22:22.959747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.865 [2024-04-26 12:22:22.959761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.865 [2024-04-26 12:22:22.959768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.865 [2024-04-26 12:22:22.959774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.865 [2024-04-26 12:22:22.959787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.865 qpair failed and we were unable to recover it. 
00:26:21.865 [2024-04-26 12:22:22.969776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.865 [2024-04-26 12:22:22.969832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.865 [2024-04-26 12:22:22.969852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.865 [2024-04-26 12:22:22.969859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.865 [2024-04-26 12:22:22.969865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.865 [2024-04-26 12:22:22.969878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.865 qpair failed and we were unable to recover it. 00:26:21.865 [2024-04-26 12:22:22.979754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.865 [2024-04-26 12:22:22.979830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.865 [2024-04-26 12:22:22.979853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.865 [2024-04-26 12:22:22.979860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.865 [2024-04-26 12:22:22.979871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.865 [2024-04-26 12:22:22.979885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.865 qpair failed and we were unable to recover it. 00:26:21.865 [2024-04-26 12:22:22.989832] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.865 [2024-04-26 12:22:22.989893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.865 [2024-04-26 12:22:22.989907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.865 [2024-04-26 12:22:22.989914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.865 [2024-04-26 12:22:22.989920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.865 [2024-04-26 12:22:22.989933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.865 qpair failed and we were unable to recover it. 
00:26:21.865 [2024-04-26 12:22:22.999848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.865 [2024-04-26 12:22:22.999906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.865 [2024-04-26 12:22:22.999920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.865 [2024-04-26 12:22:22.999927] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.865 [2024-04-26 12:22:22.999933] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.865 [2024-04-26 12:22:22.999945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.865 qpair failed and we were unable to recover it. 00:26:21.865 [2024-04-26 12:22:23.009882] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.865 [2024-04-26 12:22:23.009939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.865 [2024-04-26 12:22:23.009953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.865 [2024-04-26 12:22:23.009960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.865 [2024-04-26 12:22:23.009966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.865 [2024-04-26 12:22:23.009979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.865 qpair failed and we were unable to recover it. 00:26:21.865 [2024-04-26 12:22:23.019901] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.865 [2024-04-26 12:22:23.020000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.865 [2024-04-26 12:22:23.020015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.865 [2024-04-26 12:22:23.020022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.865 [2024-04-26 12:22:23.020028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.865 [2024-04-26 12:22:23.020041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.865 qpair failed and we were unable to recover it. 
00:26:21.865 [2024-04-26 12:22:23.029933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.865 [2024-04-26 12:22:23.029989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.865 [2024-04-26 12:22:23.030004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.865 [2024-04-26 12:22:23.030011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.865 [2024-04-26 12:22:23.030017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.865 [2024-04-26 12:22:23.030030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.865 qpair failed and we were unable to recover it. 00:26:21.865 [2024-04-26 12:22:23.039846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.865 [2024-04-26 12:22:23.039896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.865 [2024-04-26 12:22:23.039910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.865 [2024-04-26 12:22:23.039917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.865 [2024-04-26 12:22:23.039923] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.865 [2024-04-26 12:22:23.039936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.865 qpair failed and we were unable to recover it. 00:26:21.865 [2024-04-26 12:22:23.050036] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.865 [2024-04-26 12:22:23.050093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.865 [2024-04-26 12:22:23.050107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.865 [2024-04-26 12:22:23.050114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.865 [2024-04-26 12:22:23.050120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.865 [2024-04-26 12:22:23.050133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.865 qpair failed and we were unable to recover it. 
00:26:21.865 [2024-04-26 12:22:23.060015] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.865 [2024-04-26 12:22:23.060107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.865 [2024-04-26 12:22:23.060121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.865 [2024-04-26 12:22:23.060128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.865 [2024-04-26 12:22:23.060134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.865 [2024-04-26 12:22:23.060146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.865 qpair failed and we were unable to recover it. 00:26:21.865 [2024-04-26 12:22:23.070051] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.865 [2024-04-26 12:22:23.070105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.865 [2024-04-26 12:22:23.070119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.865 [2024-04-26 12:22:23.070129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.865 [2024-04-26 12:22:23.070135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.865 [2024-04-26 12:22:23.070148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.865 qpair failed and we were unable to recover it. 00:26:21.865 [2024-04-26 12:22:23.080090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.865 [2024-04-26 12:22:23.080145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.865 [2024-04-26 12:22:23.080159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.865 [2024-04-26 12:22:23.080166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.866 [2024-04-26 12:22:23.080172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:21.866 [2024-04-26 12:22:23.080186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.866 qpair failed and we were unable to recover it. 
00:26:22.128 [2024-04-26 12:22:23.090156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.128 [2024-04-26 12:22:23.090215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.128 [2024-04-26 12:22:23.090229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.128 [2024-04-26 12:22:23.090236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.128 [2024-04-26 12:22:23.090242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.128 [2024-04-26 12:22:23.090255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.128 qpair failed and we were unable to recover it. 00:26:22.128 [2024-04-26 12:22:23.100141] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.128 [2024-04-26 12:22:23.100199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.128 [2024-04-26 12:22:23.100213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.128 [2024-04-26 12:22:23.100220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.128 [2024-04-26 12:22:23.100226] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.128 [2024-04-26 12:22:23.100239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.128 qpair failed and we were unable to recover it. 00:26:22.128 [2024-04-26 12:22:23.110131] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.128 [2024-04-26 12:22:23.110181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.128 [2024-04-26 12:22:23.110195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.128 [2024-04-26 12:22:23.110202] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.128 [2024-04-26 12:22:23.110208] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.128 [2024-04-26 12:22:23.110221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.128 qpair failed and we were unable to recover it. 
00:26:22.128 [2024-04-26 12:22:23.120191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.128 [2024-04-26 12:22:23.120240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.128 [2024-04-26 12:22:23.120255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.128 [2024-04-26 12:22:23.120261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.128 [2024-04-26 12:22:23.120267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.128 [2024-04-26 12:22:23.120280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.128 qpair failed and we were unable to recover it. 00:26:22.128 [2024-04-26 12:22:23.130213] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.128 [2024-04-26 12:22:23.130268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.128 [2024-04-26 12:22:23.130282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.128 [2024-04-26 12:22:23.130288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.128 [2024-04-26 12:22:23.130294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.128 [2024-04-26 12:22:23.130307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.128 qpair failed and we were unable to recover it. 00:26:22.128 [2024-04-26 12:22:23.140250] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.128 [2024-04-26 12:22:23.140306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.128 [2024-04-26 12:22:23.140321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.128 [2024-04-26 12:22:23.140327] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.128 [2024-04-26 12:22:23.140333] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.128 [2024-04-26 12:22:23.140346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.128 qpair failed and we were unable to recover it. 
00:26:22.128 [2024-04-26 12:22:23.150269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.128 [2024-04-26 12:22:23.150322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.129 [2024-04-26 12:22:23.150336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.129 [2024-04-26 12:22:23.150342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.129 [2024-04-26 12:22:23.150348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.129 [2024-04-26 12:22:23.150361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.129 qpair failed and we were unable to recover it. 00:26:22.129 [2024-04-26 12:22:23.160281] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.129 [2024-04-26 12:22:23.160337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.129 [2024-04-26 12:22:23.160351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.129 [2024-04-26 12:22:23.160361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.129 [2024-04-26 12:22:23.160368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.129 [2024-04-26 12:22:23.160380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.129 qpair failed and we were unable to recover it. 00:26:22.129 [2024-04-26 12:22:23.170321] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.129 [2024-04-26 12:22:23.170373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.129 [2024-04-26 12:22:23.170387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.129 [2024-04-26 12:22:23.170394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.129 [2024-04-26 12:22:23.170400] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.129 [2024-04-26 12:22:23.170412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.129 qpair failed and we were unable to recover it. 
00:26:22.129 [2024-04-26 12:22:23.180258] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.129 [2024-04-26 12:22:23.180315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.129 [2024-04-26 12:22:23.180329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.129 [2024-04-26 12:22:23.180336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.129 [2024-04-26 12:22:23.180342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.129 [2024-04-26 12:22:23.180355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.129 qpair failed and we were unable to recover it. 00:26:22.129 [2024-04-26 12:22:23.190360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.129 [2024-04-26 12:22:23.190409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.129 [2024-04-26 12:22:23.190424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.129 [2024-04-26 12:22:23.190431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.129 [2024-04-26 12:22:23.190437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.129 [2024-04-26 12:22:23.190449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.129 qpair failed and we were unable to recover it. 00:26:22.129 [2024-04-26 12:22:23.200422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.129 [2024-04-26 12:22:23.200479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.129 [2024-04-26 12:22:23.200493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.129 [2024-04-26 12:22:23.200500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.129 [2024-04-26 12:22:23.200506] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.129 [2024-04-26 12:22:23.200518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.129 qpair failed and we were unable to recover it. 
00:26:22.129 [2024-04-26 12:22:23.210439] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.129 [2024-04-26 12:22:23.210494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.129 [2024-04-26 12:22:23.210509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.129 [2024-04-26 12:22:23.210515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.129 [2024-04-26 12:22:23.210521] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.129 [2024-04-26 12:22:23.210533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.129 qpair failed and we were unable to recover it. 00:26:22.129 [2024-04-26 12:22:23.220470] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.129 [2024-04-26 12:22:23.220532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.129 [2024-04-26 12:22:23.220547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.129 [2024-04-26 12:22:23.220553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.129 [2024-04-26 12:22:23.220559] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.129 [2024-04-26 12:22:23.220572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.129 qpair failed and we were unable to recover it. 00:26:22.129 [2024-04-26 12:22:23.230567] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.129 [2024-04-26 12:22:23.230623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.129 [2024-04-26 12:22:23.230637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.129 [2024-04-26 12:22:23.230644] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.129 [2024-04-26 12:22:23.230650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.129 [2024-04-26 12:22:23.230662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.129 qpair failed and we were unable to recover it. 
00:26:22.129 [2024-04-26 12:22:23.240534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.129 [2024-04-26 12:22:23.240590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.129 [2024-04-26 12:22:23.240614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.129 [2024-04-26 12:22:23.240623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.129 [2024-04-26 12:22:23.240629] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.129 [2024-04-26 12:22:23.240647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.129 qpair failed and we were unable to recover it. 00:26:22.129 [2024-04-26 12:22:23.250558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.129 [2024-04-26 12:22:23.250651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.129 [2024-04-26 12:22:23.250676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.129 [2024-04-26 12:22:23.250689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.129 [2024-04-26 12:22:23.250696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.129 [2024-04-26 12:22:23.250714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.129 qpair failed and we were unable to recover it. 00:26:22.129 [2024-04-26 12:22:23.260595] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.129 [2024-04-26 12:22:23.260658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.129 [2024-04-26 12:22:23.260674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.129 [2024-04-26 12:22:23.260681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.129 [2024-04-26 12:22:23.260687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.129 [2024-04-26 12:22:23.260701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.129 qpair failed and we were unable to recover it. 
00:26:22.129 [2024-04-26 12:22:23.270620] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.129 [2024-04-26 12:22:23.270672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.129 [2024-04-26 12:22:23.270687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.129 [2024-04-26 12:22:23.270694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.129 [2024-04-26 12:22:23.270700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.129 [2024-04-26 12:22:23.270714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.129 qpair failed and we were unable to recover it. 00:26:22.129 [2024-04-26 12:22:23.280635] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.129 [2024-04-26 12:22:23.280686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.129 [2024-04-26 12:22:23.280701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.130 [2024-04-26 12:22:23.280708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.130 [2024-04-26 12:22:23.280714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.130 [2024-04-26 12:22:23.280727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.130 qpair failed and we were unable to recover it. 00:26:22.130 [2024-04-26 12:22:23.290700] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.130 [2024-04-26 12:22:23.290759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.130 [2024-04-26 12:22:23.290774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.130 [2024-04-26 12:22:23.290781] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.130 [2024-04-26 12:22:23.290787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.130 [2024-04-26 12:22:23.290799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.130 qpair failed and we were unable to recover it. 
00:26:22.130 [2024-04-26 12:22:23.300725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.130 [2024-04-26 12:22:23.300813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.130 [2024-04-26 12:22:23.300827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.130 [2024-04-26 12:22:23.300833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.130 [2024-04-26 12:22:23.300844] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.130 [2024-04-26 12:22:23.300857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.130 qpair failed and we were unable to recover it. 00:26:22.130 [2024-04-26 12:22:23.310721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.130 [2024-04-26 12:22:23.310773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.130 [2024-04-26 12:22:23.310787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.130 [2024-04-26 12:22:23.310794] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.130 [2024-04-26 12:22:23.310800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.130 [2024-04-26 12:22:23.310812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.130 qpair failed and we were unable to recover it. 00:26:22.130 [2024-04-26 12:22:23.320739] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.130 [2024-04-26 12:22:23.320793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.130 [2024-04-26 12:22:23.320808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.130 [2024-04-26 12:22:23.320815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.130 [2024-04-26 12:22:23.320820] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.130 [2024-04-26 12:22:23.320834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.130 qpair failed and we were unable to recover it. 
00:26:22.130 [2024-04-26 12:22:23.330836] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.130 [2024-04-26 12:22:23.330923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.130 [2024-04-26 12:22:23.330937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.130 [2024-04-26 12:22:23.330944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.130 [2024-04-26 12:22:23.330950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.130 [2024-04-26 12:22:23.330963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.130 qpair failed and we were unable to recover it. 00:26:22.130 [2024-04-26 12:22:23.340821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.130 [2024-04-26 12:22:23.340883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.130 [2024-04-26 12:22:23.340901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.130 [2024-04-26 12:22:23.340909] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.130 [2024-04-26 12:22:23.340915] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.130 [2024-04-26 12:22:23.340928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.130 qpair failed and we were unable to recover it. 00:26:22.392 [2024-04-26 12:22:23.350768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.392 [2024-04-26 12:22:23.350824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.392 [2024-04-26 12:22:23.350842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.392 [2024-04-26 12:22:23.350849] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.392 [2024-04-26 12:22:23.350855] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.392 [2024-04-26 12:22:23.350869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.392 qpair failed and we were unable to recover it. 
00:26:22.392 [2024-04-26 12:22:23.360839] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.392 [2024-04-26 12:22:23.360890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.392 [2024-04-26 12:22:23.360905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.392 [2024-04-26 12:22:23.360911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.392 [2024-04-26 12:22:23.360917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.392 [2024-04-26 12:22:23.360930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.392 qpair failed and we were unable to recover it. 00:26:22.392 [2024-04-26 12:22:23.370911] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.392 [2024-04-26 12:22:23.371008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.392 [2024-04-26 12:22:23.371021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.392 [2024-04-26 12:22:23.371028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.392 [2024-04-26 12:22:23.371034] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.392 [2024-04-26 12:22:23.371047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.392 qpair failed and we were unable to recover it. 00:26:22.392 [2024-04-26 12:22:23.380806] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.392 [2024-04-26 12:22:23.380876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.392 [2024-04-26 12:22:23.380892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.392 [2024-04-26 12:22:23.380898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.392 [2024-04-26 12:22:23.380904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.392 [2024-04-26 12:22:23.380917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.392 qpair failed and we were unable to recover it. 
00:26:22.392 [2024-04-26 12:22:23.390917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.392 [2024-04-26 12:22:23.390964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.392 [2024-04-26 12:22:23.390978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.392 [2024-04-26 12:22:23.390985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.392 [2024-04-26 12:22:23.390991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.392 [2024-04-26 12:22:23.391003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.392 qpair failed and we were unable to recover it. 00:26:22.392 [2024-04-26 12:22:23.400918] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.392 [2024-04-26 12:22:23.400967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.392 [2024-04-26 12:22:23.400981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.392 [2024-04-26 12:22:23.400988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.392 [2024-04-26 12:22:23.400994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.392 [2024-04-26 12:22:23.401007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.392 qpair failed and we were unable to recover it. 00:26:22.392 [2024-04-26 12:22:23.411014] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.392 [2024-04-26 12:22:23.411069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.392 [2024-04-26 12:22:23.411083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.392 [2024-04-26 12:22:23.411089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.392 [2024-04-26 12:22:23.411095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.392 [2024-04-26 12:22:23.411108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.392 qpair failed and we were unable to recover it. 
00:26:22.392 [2024-04-26 12:22:23.420963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.392 [2024-04-26 12:22:23.421024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.392 [2024-04-26 12:22:23.421040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.392 [2024-04-26 12:22:23.421047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.392 [2024-04-26 12:22:23.421053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.392 [2024-04-26 12:22:23.421066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.392 qpair failed and we were unable to recover it. 00:26:22.392 [2024-04-26 12:22:23.431102] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.392 [2024-04-26 12:22:23.431164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.431185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.431192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.431198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.393 [2024-04-26 12:22:23.431211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.393 qpair failed and we were unable to recover it. 00:26:22.393 [2024-04-26 12:22:23.441072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.393 [2024-04-26 12:22:23.441123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.441137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.441144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.441150] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.393 [2024-04-26 12:22:23.441163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.393 qpair failed and we were unable to recover it. 
00:26:22.393 [2024-04-26 12:22:23.451133] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.393 [2024-04-26 12:22:23.451190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.451205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.451211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.451217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.393 [2024-04-26 12:22:23.451230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.393 qpair failed and we were unable to recover it. 00:26:22.393 [2024-04-26 12:22:23.461153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.393 [2024-04-26 12:22:23.461209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.461223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.461230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.461236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.393 [2024-04-26 12:22:23.461248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.393 qpair failed and we were unable to recover it. 00:26:22.393 [2024-04-26 12:22:23.471036] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.393 [2024-04-26 12:22:23.471093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.471107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.471114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.471120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.393 [2024-04-26 12:22:23.471136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.393 qpair failed and we were unable to recover it. 
00:26:22.393 [2024-04-26 12:22:23.481186] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.393 [2024-04-26 12:22:23.481236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.481251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.481257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.481263] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.393 [2024-04-26 12:22:23.481276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.393 qpair failed and we were unable to recover it. 00:26:22.393 [2024-04-26 12:22:23.491239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.393 [2024-04-26 12:22:23.491296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.491310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.491317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.491323] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.393 [2024-04-26 12:22:23.491336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.393 qpair failed and we were unable to recover it. 00:26:22.393 [2024-04-26 12:22:23.501272] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.393 [2024-04-26 12:22:23.501331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.501346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.501353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.501359] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.393 [2024-04-26 12:22:23.501372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.393 qpair failed and we were unable to recover it. 
00:26:22.393 [2024-04-26 12:22:23.511258] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.393 [2024-04-26 12:22:23.511302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.511316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.511323] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.511329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.393 [2024-04-26 12:22:23.511342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.393 qpair failed and we were unable to recover it. 00:26:22.393 [2024-04-26 12:22:23.521290] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.393 [2024-04-26 12:22:23.521363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.521382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.521389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.521398] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.393 [2024-04-26 12:22:23.521412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.393 qpair failed and we were unable to recover it. 00:26:22.393 [2024-04-26 12:22:23.531280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.393 [2024-04-26 12:22:23.531377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.531391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.531398] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.531404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.393 [2024-04-26 12:22:23.531417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.393 qpair failed and we were unable to recover it. 
00:26:22.393 [2024-04-26 12:22:23.541333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.393 [2024-04-26 12:22:23.541390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.541404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.541411] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.541417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.393 [2024-04-26 12:22:23.541430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.393 qpair failed and we were unable to recover it. 00:26:22.393 [2024-04-26 12:22:23.551245] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.393 [2024-04-26 12:22:23.551330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.551344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.551351] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.551357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.393 [2024-04-26 12:22:23.551370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.393 qpair failed and we were unable to recover it. 00:26:22.393 [2024-04-26 12:22:23.561282] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.393 [2024-04-26 12:22:23.561333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.561349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.561356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.561362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.393 [2024-04-26 12:22:23.561379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.393 qpair failed and we were unable to recover it. 
00:26:22.393 [2024-04-26 12:22:23.571433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.393 [2024-04-26 12:22:23.571495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.393 [2024-04-26 12:22:23.571510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.393 [2024-04-26 12:22:23.571516] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.393 [2024-04-26 12:22:23.571522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.394 [2024-04-26 12:22:23.571535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.394 qpair failed and we were unable to recover it. 00:26:22.394 [2024-04-26 12:22:23.581474] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.394 [2024-04-26 12:22:23.581525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.394 [2024-04-26 12:22:23.581540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.394 [2024-04-26 12:22:23.581546] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.394 [2024-04-26 12:22:23.581552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.394 [2024-04-26 12:22:23.581565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.394 qpair failed and we were unable to recover it. 00:26:22.394 [2024-04-26 12:22:23.591457] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.394 [2024-04-26 12:22:23.591501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.394 [2024-04-26 12:22:23.591515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.394 [2024-04-26 12:22:23.591522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.394 [2024-04-26 12:22:23.591528] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.394 [2024-04-26 12:22:23.591540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.394 qpair failed and we were unable to recover it. 
00:26:22.394 [2024-04-26 12:22:23.601556] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.394 [2024-04-26 12:22:23.601628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.394 [2024-04-26 12:22:23.601643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.394 [2024-04-26 12:22:23.601649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.394 [2024-04-26 12:22:23.601656] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.394 [2024-04-26 12:22:23.601672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.394 qpair failed and we were unable to recover it. 00:26:22.659 [2024-04-26 12:22:23.611578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.659 [2024-04-26 12:22:23.611634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.659 [2024-04-26 12:22:23.611652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.659 [2024-04-26 12:22:23.611659] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.659 [2024-04-26 12:22:23.611665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.659 [2024-04-26 12:22:23.611678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.659 qpair failed and we were unable to recover it. 00:26:22.659 [2024-04-26 12:22:23.621508] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.659 [2024-04-26 12:22:23.621574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.659 [2024-04-26 12:22:23.621589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.659 [2024-04-26 12:22:23.621596] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.659 [2024-04-26 12:22:23.621602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.659 [2024-04-26 12:22:23.621614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.659 qpair failed and we were unable to recover it. 
00:26:22.659 [2024-04-26 12:22:23.631625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.659 [2024-04-26 12:22:23.631679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.659 [2024-04-26 12:22:23.631694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.659 [2024-04-26 12:22:23.631701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.659 [2024-04-26 12:22:23.631707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.659 [2024-04-26 12:22:23.631719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.659 qpair failed and we were unable to recover it. 00:26:22.659 [2024-04-26 12:22:23.641654] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.659 [2024-04-26 12:22:23.641698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.659 [2024-04-26 12:22:23.641712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.659 [2024-04-26 12:22:23.641719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.659 [2024-04-26 12:22:23.641725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.659 [2024-04-26 12:22:23.641737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.659 qpair failed and we were unable to recover it. 00:26:22.659 [2024-04-26 12:22:23.651731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.659 [2024-04-26 12:22:23.651787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.659 [2024-04-26 12:22:23.651803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.659 [2024-04-26 12:22:23.651809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.660 [2024-04-26 12:22:23.651819] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.660 [2024-04-26 12:22:23.651832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.660 qpair failed and we were unable to recover it. 
00:26:22.660 [2024-04-26 12:22:23.661676] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.660 [2024-04-26 12:22:23.661729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.660 [2024-04-26 12:22:23.661743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.660 [2024-04-26 12:22:23.661750] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.660 [2024-04-26 12:22:23.661756] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.660 [2024-04-26 12:22:23.661768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.660 qpair failed and we were unable to recover it. 00:26:22.660 [2024-04-26 12:22:23.671702] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.660 [2024-04-26 12:22:23.671749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.660 [2024-04-26 12:22:23.671764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.660 [2024-04-26 12:22:23.671770] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.660 [2024-04-26 12:22:23.671776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.660 [2024-04-26 12:22:23.671789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.660 qpair failed and we were unable to recover it. 00:26:22.660 [2024-04-26 12:22:23.681701] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.660 [2024-04-26 12:22:23.681748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.660 [2024-04-26 12:22:23.681762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.660 [2024-04-26 12:22:23.681769] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.660 [2024-04-26 12:22:23.681775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.660 [2024-04-26 12:22:23.681788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.660 qpair failed and we were unable to recover it. 
00:26:22.660 [2024-04-26 12:22:23.691806] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.660 [2024-04-26 12:22:23.691864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.660 [2024-04-26 12:22:23.691878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.660 [2024-04-26 12:22:23.691884] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.660 [2024-04-26 12:22:23.691890] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.660 [2024-04-26 12:22:23.691903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.660 qpair failed and we were unable to recover it. 00:26:22.660 [2024-04-26 12:22:23.701846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.660 [2024-04-26 12:22:23.701906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.660 [2024-04-26 12:22:23.701921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.660 [2024-04-26 12:22:23.701927] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.660 [2024-04-26 12:22:23.701933] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.660 [2024-04-26 12:22:23.701946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.660 qpair failed and we were unable to recover it. 00:26:22.660 [2024-04-26 12:22:23.711830] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.660 [2024-04-26 12:22:23.711884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.660 [2024-04-26 12:22:23.711898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.660 [2024-04-26 12:22:23.711905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.660 [2024-04-26 12:22:23.711911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.660 [2024-04-26 12:22:23.711923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.660 qpair failed and we were unable to recover it. 
00:26:22.660 [2024-04-26 12:22:23.721895] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.660 [2024-04-26 12:22:23.721982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.660 [2024-04-26 12:22:23.721996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.660 [2024-04-26 12:22:23.722003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.660 [2024-04-26 12:22:23.722009] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.660 [2024-04-26 12:22:23.722021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.660 qpair failed and we were unable to recover it. 00:26:22.660 [2024-04-26 12:22:23.731898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.660 [2024-04-26 12:22:23.731951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.660 [2024-04-26 12:22:23.731965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.660 [2024-04-26 12:22:23.731972] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.660 [2024-04-26 12:22:23.731978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.660 [2024-04-26 12:22:23.731991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.660 qpair failed and we were unable to recover it. 00:26:22.660 [2024-04-26 12:22:23.741967] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.660 [2024-04-26 12:22:23.742035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.660 [2024-04-26 12:22:23.742049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.660 [2024-04-26 12:22:23.742055] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.660 [2024-04-26 12:22:23.742065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.660 [2024-04-26 12:22:23.742078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.660 qpair failed and we were unable to recover it. 
00:26:22.660 [2024-04-26 12:22:23.751793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.660 [2024-04-26 12:22:23.751880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.660 [2024-04-26 12:22:23.751894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.660 [2024-04-26 12:22:23.751901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.660 [2024-04-26 12:22:23.751906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.660 [2024-04-26 12:22:23.751919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.660 qpair failed and we were unable to recover it. 00:26:22.660 [2024-04-26 12:22:23.761941] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.660 [2024-04-26 12:22:23.761990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.660 [2024-04-26 12:22:23.762004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.660 [2024-04-26 12:22:23.762011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.660 [2024-04-26 12:22:23.762017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.661 [2024-04-26 12:22:23.762029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.661 qpair failed and we were unable to recover it. 00:26:22.661 [2024-04-26 12:22:23.771948] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.661 [2024-04-26 12:22:23.771997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.661 [2024-04-26 12:22:23.772010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.661 [2024-04-26 12:22:23.772017] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.661 [2024-04-26 12:22:23.772023] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.661 [2024-04-26 12:22:23.772036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.661 qpair failed and we were unable to recover it. 
00:26:22.661 [2024-04-26 12:22:23.782027] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.661 [2024-04-26 12:22:23.782081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.661 [2024-04-26 12:22:23.782096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.661 [2024-04-26 12:22:23.782102] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.661 [2024-04-26 12:22:23.782108] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.661 [2024-04-26 12:22:23.782121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.661 qpair failed and we were unable to recover it. 00:26:22.661 [2024-04-26 12:22:23.792031] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.661 [2024-04-26 12:22:23.792122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.661 [2024-04-26 12:22:23.792137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.661 [2024-04-26 12:22:23.792144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.661 [2024-04-26 12:22:23.792150] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.661 [2024-04-26 12:22:23.792163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.661 qpair failed and we were unable to recover it. 00:26:22.661 [2024-04-26 12:22:23.802071] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.661 [2024-04-26 12:22:23.802116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.661 [2024-04-26 12:22:23.802129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.661 [2024-04-26 12:22:23.802136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.661 [2024-04-26 12:22:23.802142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.661 [2024-04-26 12:22:23.802155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.661 qpair failed and we were unable to recover it. 
00:26:22.661 [2024-04-26 12:22:23.812074] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.661 [2024-04-26 12:22:23.812157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.661 [2024-04-26 12:22:23.812171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.661 [2024-04-26 12:22:23.812178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.661 [2024-04-26 12:22:23.812184] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.661 [2024-04-26 12:22:23.812197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.661 qpair failed and we were unable to recover it. 00:26:22.661 [2024-04-26 12:22:23.822116] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.661 [2024-04-26 12:22:23.822203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.661 [2024-04-26 12:22:23.822218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.661 [2024-04-26 12:22:23.822225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.661 [2024-04-26 12:22:23.822233] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.661 [2024-04-26 12:22:23.822249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.661 qpair failed and we were unable to recover it. 00:26:22.661 [2024-04-26 12:22:23.832025] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.661 [2024-04-26 12:22:23.832090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.661 [2024-04-26 12:22:23.832104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.661 [2024-04-26 12:22:23.832111] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.661 [2024-04-26 12:22:23.832120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.661 [2024-04-26 12:22:23.832133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.661 qpair failed and we were unable to recover it. 
00:26:22.661 [2024-04-26 12:22:23.842166] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.661 [2024-04-26 12:22:23.842217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.661 [2024-04-26 12:22:23.842231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.661 [2024-04-26 12:22:23.842238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.661 [2024-04-26 12:22:23.842244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.661 [2024-04-26 12:22:23.842256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.661 qpair failed and we were unable to recover it. 00:26:22.661 [2024-04-26 12:22:23.852201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.661 [2024-04-26 12:22:23.852247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.661 [2024-04-26 12:22:23.852261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.661 [2024-04-26 12:22:23.852268] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.661 [2024-04-26 12:22:23.852274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.661 [2024-04-26 12:22:23.852286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.661 qpair failed and we were unable to recover it. 00:26:22.661 [2024-04-26 12:22:23.862090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.661 [2024-04-26 12:22:23.862141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.661 [2024-04-26 12:22:23.862155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.661 [2024-04-26 12:22:23.862162] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.661 [2024-04-26 12:22:23.862168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.661 [2024-04-26 12:22:23.862182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.661 qpair failed and we were unable to recover it. 
00:26:22.661 [2024-04-26 12:22:23.872249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.661 [2024-04-26 12:22:23.872296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.661 [2024-04-26 12:22:23.872311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.662 [2024-04-26 12:22:23.872317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.662 [2024-04-26 12:22:23.872323] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.662 [2024-04-26 12:22:23.872336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.662 qpair failed and we were unable to recover it. 00:26:22.974 [2024-04-26 12:22:23.882258] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.974 [2024-04-26 12:22:23.882310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.974 [2024-04-26 12:22:23.882325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.974 [2024-04-26 12:22:23.882331] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.974 [2024-04-26 12:22:23.882337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.974 [2024-04-26 12:22:23.882351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.974 qpair failed and we were unable to recover it. 00:26:22.974 [2024-04-26 12:22:23.892291] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.974 [2024-04-26 12:22:23.892336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.974 [2024-04-26 12:22:23.892350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.974 [2024-04-26 12:22:23.892357] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.974 [2024-04-26 12:22:23.892363] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.974 [2024-04-26 12:22:23.892375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.974 qpair failed and we were unable to recover it. 
00:26:22.974 [2024-04-26 12:22:23.902342] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.974 [2024-04-26 12:22:23.902399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.974 [2024-04-26 12:22:23.902413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.974 [2024-04-26 12:22:23.902420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.974 [2024-04-26 12:22:23.902426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.974 [2024-04-26 12:22:23.902439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.974 qpair failed and we were unable to recover it. 00:26:22.974 [2024-04-26 12:22:23.912354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.974 [2024-04-26 12:22:23.912405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.974 [2024-04-26 12:22:23.912422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.974 [2024-04-26 12:22:23.912429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.974 [2024-04-26 12:22:23.912435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.974 [2024-04-26 12:22:23.912449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.974 qpair failed and we were unable to recover it. 00:26:22.974 [2024-04-26 12:22:23.922362] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.974 [2024-04-26 12:22:23.922409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.974 [2024-04-26 12:22:23.922423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.974 [2024-04-26 12:22:23.922435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.974 [2024-04-26 12:22:23.922441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.974 [2024-04-26 12:22:23.922454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.974 qpair failed and we were unable to recover it. 
00:26:22.974 [2024-04-26 12:22:23.932441] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.974 [2024-04-26 12:22:23.932491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.974 [2024-04-26 12:22:23.932505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.974 [2024-04-26 12:22:23.932512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.974 [2024-04-26 12:22:23.932519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.974 [2024-04-26 12:22:23.932532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.974 qpair failed and we were unable to recover it. 00:26:22.974 [2024-04-26 12:22:23.942475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.974 [2024-04-26 12:22:23.942568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.974 [2024-04-26 12:22:23.942582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.974 [2024-04-26 12:22:23.942589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.974 [2024-04-26 12:22:23.942595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.974 [2024-04-26 12:22:23.942608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.974 qpair failed and we were unable to recover it. 00:26:22.974 [2024-04-26 12:22:23.952465] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.974 [2024-04-26 12:22:23.952552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.974 [2024-04-26 12:22:23.952577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.974 [2024-04-26 12:22:23.952585] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.974 [2024-04-26 12:22:23.952592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.974 [2024-04-26 12:22:23.952609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.974 qpair failed and we were unable to recover it. 
00:26:22.974 [2024-04-26 12:22:23.962376] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.974 [2024-04-26 12:22:23.962423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.974 [2024-04-26 12:22:23.962440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.974 [2024-04-26 12:22:23.962447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.974 [2024-04-26 12:22:23.962453] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.974 [2024-04-26 12:22:23.962468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.974 qpair failed and we were unable to recover it. 00:26:22.974 [2024-04-26 12:22:23.972492] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.974 [2024-04-26 12:22:23.972544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.974 [2024-04-26 12:22:23.972560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.974 [2024-04-26 12:22:23.972566] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.974 [2024-04-26 12:22:23.972572] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.974 [2024-04-26 12:22:23.972586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.974 qpair failed and we were unable to recover it. 00:26:22.974 [2024-04-26 12:22:23.982552] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.974 [2024-04-26 12:22:23.982606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.974 [2024-04-26 12:22:23.982631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.975 [2024-04-26 12:22:23.982639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.975 [2024-04-26 12:22:23.982646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.975 [2024-04-26 12:22:23.982663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.975 qpair failed and we were unable to recover it. 
00:26:22.975 [2024-04-26 12:22:23.992543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.975 [2024-04-26 12:22:23.992624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.975 [2024-04-26 12:22:23.992649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.975 [2024-04-26 12:22:23.992658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.975 [2024-04-26 12:22:23.992665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.975 [2024-04-26 12:22:23.992681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.975 qpair failed and we were unable to recover it. 00:26:22.975 [2024-04-26 12:22:24.002605] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.975 [2024-04-26 12:22:24.002653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.975 [2024-04-26 12:22:24.002678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.975 [2024-04-26 12:22:24.002686] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.975 [2024-04-26 12:22:24.002693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.975 [2024-04-26 12:22:24.002710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.975 qpair failed and we were unable to recover it. 00:26:22.975 [2024-04-26 12:22:24.012630] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.975 [2024-04-26 12:22:24.012679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.975 [2024-04-26 12:22:24.012695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.975 [2024-04-26 12:22:24.012706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.975 [2024-04-26 12:22:24.012712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.975 [2024-04-26 12:22:24.012726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.975 qpair failed and we were unable to recover it. 
00:26:22.975 [2024-04-26 12:22:24.022655] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.975 [2024-04-26 12:22:24.022713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.975 [2024-04-26 12:22:24.022727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.975 [2024-04-26 12:22:24.022734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.975 [2024-04-26 12:22:24.022740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.975 [2024-04-26 12:22:24.022753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.975 qpair failed and we were unable to recover it. 00:26:22.975 [2024-04-26 12:22:24.032555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.975 [2024-04-26 12:22:24.032600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.975 [2024-04-26 12:22:24.032614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.975 [2024-04-26 12:22:24.032621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.975 [2024-04-26 12:22:24.032627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.975 [2024-04-26 12:22:24.032640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.975 qpair failed and we were unable to recover it. 00:26:22.975 [2024-04-26 12:22:24.042708] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.975 [2024-04-26 12:22:24.042757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.975 [2024-04-26 12:22:24.042771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.975 [2024-04-26 12:22:24.042777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.975 [2024-04-26 12:22:24.042783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.975 [2024-04-26 12:22:24.042796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.975 qpair failed and we were unable to recover it. 
00:26:22.975 [2024-04-26 12:22:24.052750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.975 [2024-04-26 12:22:24.052795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.975 [2024-04-26 12:22:24.052809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.975 [2024-04-26 12:22:24.052815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.975 [2024-04-26 12:22:24.052821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.975 [2024-04-26 12:22:24.052834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.975 qpair failed and we were unable to recover it. 00:26:22.975 [2024-04-26 12:22:24.062756] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.975 [2024-04-26 12:22:24.062812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.975 [2024-04-26 12:22:24.062825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.975 [2024-04-26 12:22:24.062832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.975 [2024-04-26 12:22:24.062864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.975 [2024-04-26 12:22:24.062880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.975 qpair failed and we were unable to recover it. 00:26:22.975 [2024-04-26 12:22:24.072691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.975 [2024-04-26 12:22:24.072739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.975 [2024-04-26 12:22:24.072753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.975 [2024-04-26 12:22:24.072760] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.975 [2024-04-26 12:22:24.072766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.975 [2024-04-26 12:22:24.072778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.975 qpair failed and we were unable to recover it. 
00:26:22.975 [2024-04-26 12:22:24.082794] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.975 [2024-04-26 12:22:24.082847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.975 [2024-04-26 12:22:24.082862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.975 [2024-04-26 12:22:24.082868] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.975 [2024-04-26 12:22:24.082874] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.975 [2024-04-26 12:22:24.082888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.975 qpair failed and we were unable to recover it. 00:26:22.975 [2024-04-26 12:22:24.092834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.975 [2024-04-26 12:22:24.092884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.975 [2024-04-26 12:22:24.092898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.975 [2024-04-26 12:22:24.092904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.975 [2024-04-26 12:22:24.092910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.975 [2024-04-26 12:22:24.092923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.975 qpair failed and we were unable to recover it. 00:26:22.975 [2024-04-26 12:22:24.102860] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.975 [2024-04-26 12:22:24.102910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.975 [2024-04-26 12:22:24.102928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.975 [2024-04-26 12:22:24.102935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.975 [2024-04-26 12:22:24.102941] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.975 [2024-04-26 12:22:24.102954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.975 qpair failed and we were unable to recover it. 
00:26:22.975 [2024-04-26 12:22:24.112893] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.976 [2024-04-26 12:22:24.112947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.976 [2024-04-26 12:22:24.112961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.976 [2024-04-26 12:22:24.112968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.976 [2024-04-26 12:22:24.112974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.976 [2024-04-26 12:22:24.112987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.976 qpair failed and we were unable to recover it. 00:26:22.976 [2024-04-26 12:22:24.122917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.976 [2024-04-26 12:22:24.122966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.976 [2024-04-26 12:22:24.122980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.976 [2024-04-26 12:22:24.122987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.976 [2024-04-26 12:22:24.122993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.976 [2024-04-26 12:22:24.123006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.976 qpair failed and we were unable to recover it. 00:26:22.976 [2024-04-26 12:22:24.132830] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.976 [2024-04-26 12:22:24.132878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.976 [2024-04-26 12:22:24.132892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.976 [2024-04-26 12:22:24.132899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.976 [2024-04-26 12:22:24.132905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.976 [2024-04-26 12:22:24.132918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.976 qpair failed and we were unable to recover it. 
00:26:22.976 [2024-04-26 12:22:24.142966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.976 [2024-04-26 12:22:24.143020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.976 [2024-04-26 12:22:24.143034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.976 [2024-04-26 12:22:24.143041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.976 [2024-04-26 12:22:24.143047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.976 [2024-04-26 12:22:24.143060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.976 qpair failed and we were unable to recover it. 00:26:22.976 [2024-04-26 12:22:24.152990] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.976 [2024-04-26 12:22:24.153062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.976 [2024-04-26 12:22:24.153076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.976 [2024-04-26 12:22:24.153083] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.976 [2024-04-26 12:22:24.153089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.976 [2024-04-26 12:22:24.153102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.976 qpair failed and we were unable to recover it. 00:26:22.976 [2024-04-26 12:22:24.163021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.976 [2024-04-26 12:22:24.163067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.976 [2024-04-26 12:22:24.163081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.976 [2024-04-26 12:22:24.163087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.976 [2024-04-26 12:22:24.163093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.976 [2024-04-26 12:22:24.163106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.976 qpair failed and we were unable to recover it. 
00:26:22.976 [2024-04-26 12:22:24.173031] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.976 [2024-04-26 12:22:24.173121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.976 [2024-04-26 12:22:24.173135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.976 [2024-04-26 12:22:24.173142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.976 [2024-04-26 12:22:24.173148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.976 [2024-04-26 12:22:24.173160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.976 qpair failed and we were unable to recover it. 00:26:22.976 [2024-04-26 12:22:24.183111] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.976 [2024-04-26 12:22:24.183192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.976 [2024-04-26 12:22:24.183206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.976 [2024-04-26 12:22:24.183213] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.976 [2024-04-26 12:22:24.183218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:22.976 [2024-04-26 12:22:24.183231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.976 qpair failed and we were unable to recover it. 00:26:23.239 [2024-04-26 12:22:24.193089] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.193139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.193160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.193167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.193173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.193186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 
00:26:23.239 [2024-04-26 12:22:24.203127] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.203181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.203195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.203202] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.203208] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.203221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 00:26:23.239 [2024-04-26 12:22:24.213171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.213269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.213283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.213290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.213296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.213309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 00:26:23.239 [2024-04-26 12:22:24.223204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.223286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.223300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.223307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.223313] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.223327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 
00:26:23.239 [2024-04-26 12:22:24.233229] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.233320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.233334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.233341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.233347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.233364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 00:26:23.239 [2024-04-26 12:22:24.243207] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.243252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.243266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.243273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.243278] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.243291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 00:26:23.239 [2024-04-26 12:22:24.253275] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.253325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.253339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.253345] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.253351] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.253364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 
00:26:23.239 [2024-04-26 12:22:24.263174] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.263224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.263238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.263245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.263251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.263264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 00:26:23.239 [2024-04-26 12:22:24.273325] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.273370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.273385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.273391] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.273398] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.273411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 00:26:23.239 [2024-04-26 12:22:24.283348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.283405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.283422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.283429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.283435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.283448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 
00:26:23.239 [2024-04-26 12:22:24.293362] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.293410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.293424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.293431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.293437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.293449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 00:26:23.239 [2024-04-26 12:22:24.303473] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.303541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.303555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.303562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.303568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.303580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 00:26:23.239 [2024-04-26 12:22:24.313435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.313486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.313510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.313519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.313526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.313543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 
00:26:23.239 [2024-04-26 12:22:24.323351] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.323409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.323426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.323433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.323439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.323458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 00:26:23.239 [2024-04-26 12:22:24.333490] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.333538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.333553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.333560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.333566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.333579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.239 qpair failed and we were unable to recover it. 00:26:23.239 [2024-04-26 12:22:24.343511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.239 [2024-04-26 12:22:24.343568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.239 [2024-04-26 12:22:24.343594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.239 [2024-04-26 12:22:24.343602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.239 [2024-04-26 12:22:24.343609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.239 [2024-04-26 12:22:24.343626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.240 qpair failed and we were unable to recover it. 
00:26:23.240 [2024-04-26 12:22:24.353548] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.240 [2024-04-26 12:22:24.353599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.240 [2024-04-26 12:22:24.353614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.240 [2024-04-26 12:22:24.353621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.240 [2024-04-26 12:22:24.353627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.240 [2024-04-26 12:22:24.353641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.240 qpair failed and we were unable to recover it. 00:26:23.240 [2024-04-26 12:22:24.363606] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.240 [2024-04-26 12:22:24.363660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.240 [2024-04-26 12:22:24.363685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.240 [2024-04-26 12:22:24.363693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.240 [2024-04-26 12:22:24.363700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.240 [2024-04-26 12:22:24.363717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.240 qpair failed and we were unable to recover it. 00:26:23.240 [2024-04-26 12:22:24.373611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.240 [2024-04-26 12:22:24.373659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.240 [2024-04-26 12:22:24.373679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.240 [2024-04-26 12:22:24.373686] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.240 [2024-04-26 12:22:24.373692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.240 [2024-04-26 12:22:24.373706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.240 qpair failed and we were unable to recover it. 
00:26:23.240 [2024-04-26 12:22:24.383507] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.240 [2024-04-26 12:22:24.383579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.240 [2024-04-26 12:22:24.383595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.240 [2024-04-26 12:22:24.383602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.240 [2024-04-26 12:22:24.383608] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.240 [2024-04-26 12:22:24.383621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.240 qpair failed and we were unable to recover it. 00:26:23.240 [2024-04-26 12:22:24.393682] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.240 [2024-04-26 12:22:24.393745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.240 [2024-04-26 12:22:24.393759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.240 [2024-04-26 12:22:24.393766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.240 [2024-04-26 12:22:24.393771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.240 [2024-04-26 12:22:24.393785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.240 qpair failed and we were unable to recover it. 00:26:23.240 [2024-04-26 12:22:24.403561] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.240 [2024-04-26 12:22:24.403607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.240 [2024-04-26 12:22:24.403621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.240 [2024-04-26 12:22:24.403628] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.240 [2024-04-26 12:22:24.403634] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.240 [2024-04-26 12:22:24.403647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.240 qpair failed and we were unable to recover it. 
00:26:23.240 [2024-04-26 12:22:24.413698] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.240 [2024-04-26 12:22:24.413746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.240 [2024-04-26 12:22:24.413760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.240 [2024-04-26 12:22:24.413767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.240 [2024-04-26 12:22:24.413776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.240 [2024-04-26 12:22:24.413789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.240 qpair failed and we were unable to recover it. 00:26:23.240 [2024-04-26 12:22:24.423758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.240 [2024-04-26 12:22:24.423806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.240 [2024-04-26 12:22:24.423821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.240 [2024-04-26 12:22:24.423828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.240 [2024-04-26 12:22:24.423834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.240 [2024-04-26 12:22:24.423852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.240 qpair failed and we were unable to recover it. 00:26:23.240 [2024-04-26 12:22:24.433778] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.240 [2024-04-26 12:22:24.433828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.240 [2024-04-26 12:22:24.433847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.240 [2024-04-26 12:22:24.433854] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.240 [2024-04-26 12:22:24.433860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.240 [2024-04-26 12:22:24.433873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.240 qpair failed and we were unable to recover it. 
00:26:23.240 [2024-04-26 12:22:24.443830] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.240 [2024-04-26 12:22:24.443902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.240 [2024-04-26 12:22:24.443916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.240 [2024-04-26 12:22:24.443923] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.240 [2024-04-26 12:22:24.443929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.240 [2024-04-26 12:22:24.443942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.240 qpair failed and we were unable to recover it. 00:26:23.240 [2024-04-26 12:22:24.453683] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.240 [2024-04-26 12:22:24.453732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.240 [2024-04-26 12:22:24.453746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.240 [2024-04-26 12:22:24.453752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.240 [2024-04-26 12:22:24.453758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.240 [2024-04-26 12:22:24.453771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.240 qpair failed and we were unable to recover it. 00:26:23.502 [2024-04-26 12:22:24.463850] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.502 [2024-04-26 12:22:24.463935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.502 [2024-04-26 12:22:24.463949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.502 [2024-04-26 12:22:24.463956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.502 [2024-04-26 12:22:24.463962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.502 [2024-04-26 12:22:24.463974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.502 qpair failed and we were unable to recover it. 
00:26:23.502 [2024-04-26 12:22:24.473738] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.502 [2024-04-26 12:22:24.473786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.502 [2024-04-26 12:22:24.473800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.502 [2024-04-26 12:22:24.473807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.502 [2024-04-26 12:22:24.473813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.502 [2024-04-26 12:22:24.473826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.502 qpair failed and we were unable to recover it. 00:26:23.502 [2024-04-26 12:22:24.483898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.502 [2024-04-26 12:22:24.483948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.502 [2024-04-26 12:22:24.483964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.502 [2024-04-26 12:22:24.483971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.502 [2024-04-26 12:22:24.483977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.502 [2024-04-26 12:22:24.483990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.502 qpair failed and we were unable to recover it. 00:26:23.502 [2024-04-26 12:22:24.493786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.502 [2024-04-26 12:22:24.493842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.502 [2024-04-26 12:22:24.493856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.502 [2024-04-26 12:22:24.493863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.502 [2024-04-26 12:22:24.493869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.502 [2024-04-26 12:22:24.493882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.502 qpair failed and we were unable to recover it. 
00:26:23.502 [2024-04-26 12:22:24.503931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.502 [2024-04-26 12:22:24.504010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.502 [2024-04-26 12:22:24.504024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.502 [2024-04-26 12:22:24.504030] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.502 [2024-04-26 12:22:24.504040] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.502 [2024-04-26 12:22:24.504054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.502 qpair failed and we were unable to recover it. 00:26:23.502 [2024-04-26 12:22:24.513978] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.502 [2024-04-26 12:22:24.514060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.502 [2024-04-26 12:22:24.514074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.503 [2024-04-26 12:22:24.514080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.503 [2024-04-26 12:22:24.514086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.503 [2024-04-26 12:22:24.514099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.503 qpair failed and we were unable to recover it. 00:26:23.503 [2024-04-26 12:22:24.524044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.503 [2024-04-26 12:22:24.524090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.503 [2024-04-26 12:22:24.524105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.503 [2024-04-26 12:22:24.524112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.503 [2024-04-26 12:22:24.524118] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.503 [2024-04-26 12:22:24.524131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.503 qpair failed and we were unable to recover it. 
00:26:23.503 [2024-04-26 12:22:24.534004] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.503 [2024-04-26 12:22:24.534051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.503 [2024-04-26 12:22:24.534065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.503 [2024-04-26 12:22:24.534072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.503 [2024-04-26 12:22:24.534078] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.503 [2024-04-26 12:22:24.534091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.503 qpair failed and we were unable to recover it. 00:26:23.503 [2024-04-26 12:22:24.544020] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.503 [2024-04-26 12:22:24.544070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.503 [2024-04-26 12:22:24.544084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.503 [2024-04-26 12:22:24.544091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.503 [2024-04-26 12:22:24.544097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.503 [2024-04-26 12:22:24.544109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.503 qpair failed and we were unable to recover it. 00:26:23.503 [2024-04-26 12:22:24.554095] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.503 [2024-04-26 12:22:24.554141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.503 [2024-04-26 12:22:24.554155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.503 [2024-04-26 12:22:24.554162] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.503 [2024-04-26 12:22:24.554168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.503 [2024-04-26 12:22:24.554181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.503 qpair failed and we were unable to recover it. 
00:26:23.503 [2024-04-26 12:22:24.564115] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.503 [2024-04-26 12:22:24.564164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.503 [2024-04-26 12:22:24.564178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.503 [2024-04-26 12:22:24.564185] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.503 [2024-04-26 12:22:24.564191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.503 [2024-04-26 12:22:24.564203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.503 qpair failed and we were unable to recover it. 00:26:23.503 [2024-04-26 12:22:24.574152] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.503 [2024-04-26 12:22:24.574199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.503 [2024-04-26 12:22:24.574213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.503 [2024-04-26 12:22:24.574219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.503 [2024-04-26 12:22:24.574226] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.503 [2024-04-26 12:22:24.574238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.503 qpair failed and we were unable to recover it. 00:26:23.503 [2024-04-26 12:22:24.584169] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.503 [2024-04-26 12:22:24.584224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.503 [2024-04-26 12:22:24.584239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.503 [2024-04-26 12:22:24.584245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.503 [2024-04-26 12:22:24.584251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.503 [2024-04-26 12:22:24.584265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.503 qpair failed and we were unable to recover it. 
00:26:23.503 [2024-04-26 12:22:24.594172] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.503 [2024-04-26 12:22:24.594221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.503 [2024-04-26 12:22:24.594235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.503 [2024-04-26 12:22:24.594241] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.503 [2024-04-26 12:22:24.594251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.503 [2024-04-26 12:22:24.594264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.503 qpair failed and we were unable to recover it. 00:26:23.503 [2024-04-26 12:22:24.604185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.503 [2024-04-26 12:22:24.604233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.503 [2024-04-26 12:22:24.604248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.503 [2024-04-26 12:22:24.604255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.503 [2024-04-26 12:22:24.604261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.503 [2024-04-26 12:22:24.604274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.503 qpair failed and we were unable to recover it. 00:26:23.503 [2024-04-26 12:22:24.614228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.503 [2024-04-26 12:22:24.614278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.503 [2024-04-26 12:22:24.614292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.503 [2024-04-26 12:22:24.614299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.503 [2024-04-26 12:22:24.614305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.503 [2024-04-26 12:22:24.614318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.503 qpair failed and we were unable to recover it. 
00:26:23.503 [2024-04-26 12:22:24.624163] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.503 [2024-04-26 12:22:24.624214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.503 [2024-04-26 12:22:24.624228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.503 [2024-04-26 12:22:24.624235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.503 [2024-04-26 12:22:24.624240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.503 [2024-04-26 12:22:24.624253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.503 qpair failed and we were unable to recover it. 00:26:23.503 [2024-04-26 12:22:24.634295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.503 [2024-04-26 12:22:24.634338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.503 [2024-04-26 12:22:24.634353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.503 [2024-04-26 12:22:24.634359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.503 [2024-04-26 12:22:24.634365] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.503 [2024-04-26 12:22:24.634378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.503 qpair failed and we were unable to recover it. 00:26:23.503 [2024-04-26 12:22:24.644329] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.503 [2024-04-26 12:22:24.644382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.503 [2024-04-26 12:22:24.644396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.503 [2024-04-26 12:22:24.644402] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.503 [2024-04-26 12:22:24.644409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.504 [2024-04-26 12:22:24.644421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.504 qpair failed and we were unable to recover it. 
00:26:23.504 [2024-04-26 12:22:24.654227] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.504 [2024-04-26 12:22:24.654275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.504 [2024-04-26 12:22:24.654289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.504 [2024-04-26 12:22:24.654295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.504 [2024-04-26 12:22:24.654301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.504 [2024-04-26 12:22:24.654314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.504 qpair failed and we were unable to recover it. 00:26:23.504 [2024-04-26 12:22:24.664378] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.504 [2024-04-26 12:22:24.664463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.504 [2024-04-26 12:22:24.664476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.504 [2024-04-26 12:22:24.664483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.504 [2024-04-26 12:22:24.664489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.504 [2024-04-26 12:22:24.664503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.504 qpair failed and we were unable to recover it. 00:26:23.504 [2024-04-26 12:22:24.674412] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.504 [2024-04-26 12:22:24.674462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.504 [2024-04-26 12:22:24.674475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.504 [2024-04-26 12:22:24.674482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.504 [2024-04-26 12:22:24.674488] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.504 [2024-04-26 12:22:24.674500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.504 qpair failed and we were unable to recover it. 
00:26:23.504 [2024-04-26 12:22:24.684441] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.504 [2024-04-26 12:22:24.684489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.504 [2024-04-26 12:22:24.684504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.504 [2024-04-26 12:22:24.684514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.504 [2024-04-26 12:22:24.684520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.504 [2024-04-26 12:22:24.684533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.504 qpair failed and we were unable to recover it. 00:26:23.504 [2024-04-26 12:22:24.694444] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.504 [2024-04-26 12:22:24.694490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.504 [2024-04-26 12:22:24.694504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.504 [2024-04-26 12:22:24.694511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.504 [2024-04-26 12:22:24.694517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.504 [2024-04-26 12:22:24.694531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.504 qpair failed and we were unable to recover it. 00:26:23.504 [2024-04-26 12:22:24.704482] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.504 [2024-04-26 12:22:24.704531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.504 [2024-04-26 12:22:24.704546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.504 [2024-04-26 12:22:24.704553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.504 [2024-04-26 12:22:24.704559] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.504 [2024-04-26 12:22:24.704571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.504 qpair failed and we were unable to recover it. 
00:26:23.504 [2024-04-26 12:22:24.714510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.504 [2024-04-26 12:22:24.714554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.504 [2024-04-26 12:22:24.714568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.504 [2024-04-26 12:22:24.714575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.504 [2024-04-26 12:22:24.714581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.504 [2024-04-26 12:22:24.714594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.504 qpair failed and we were unable to recover it. 00:26:23.768 [2024-04-26 12:22:24.724411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.768 [2024-04-26 12:22:24.724473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.768 [2024-04-26 12:22:24.724489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.768 [2024-04-26 12:22:24.724496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.768 [2024-04-26 12:22:24.724502] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.768 [2024-04-26 12:22:24.724515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.768 qpair failed and we were unable to recover it. 00:26:23.768 [2024-04-26 12:22:24.734542] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.768 [2024-04-26 12:22:24.734590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.768 [2024-04-26 12:22:24.734605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.768 [2024-04-26 12:22:24.734612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.768 [2024-04-26 12:22:24.734618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.768 [2024-04-26 12:22:24.734631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.768 qpair failed and we were unable to recover it. 
00:26:23.768 [2024-04-26 12:22:24.744596] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.768 [2024-04-26 12:22:24.744658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.768 [2024-04-26 12:22:24.744683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.768 [2024-04-26 12:22:24.744692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.768 [2024-04-26 12:22:24.744699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.768 [2024-04-26 12:22:24.744715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.768 qpair failed and we were unable to recover it. 00:26:23.768 [2024-04-26 12:22:24.754613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.768 [2024-04-26 12:22:24.754672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.768 [2024-04-26 12:22:24.754696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.768 [2024-04-26 12:22:24.754704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.768 [2024-04-26 12:22:24.754711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.768 [2024-04-26 12:22:24.754729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.768 qpair failed and we were unable to recover it. 00:26:23.768 [2024-04-26 12:22:24.764527] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.768 [2024-04-26 12:22:24.764575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.768 [2024-04-26 12:22:24.764591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.768 [2024-04-26 12:22:24.764598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.768 [2024-04-26 12:22:24.764604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.768 [2024-04-26 12:22:24.764618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.768 qpair failed and we were unable to recover it. 
00:26:23.768 [2024-04-26 12:22:24.774679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.768 [2024-04-26 12:22:24.774765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.768 [2024-04-26 12:22:24.774780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.768 [2024-04-26 12:22:24.774791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.768 [2024-04-26 12:22:24.774797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.768 [2024-04-26 12:22:24.774810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.768 qpair failed and we were unable to recover it. 00:26:23.768 [2024-04-26 12:22:24.784700] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.768 [2024-04-26 12:22:24.784752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.768 [2024-04-26 12:22:24.784767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.768 [2024-04-26 12:22:24.784774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.768 [2024-04-26 12:22:24.784780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.768 [2024-04-26 12:22:24.784793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.768 qpair failed and we were unable to recover it. 00:26:23.768 [2024-04-26 12:22:24.794727] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.768 [2024-04-26 12:22:24.794777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.768 [2024-04-26 12:22:24.794791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.768 [2024-04-26 12:22:24.794798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.768 [2024-04-26 12:22:24.794804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.768 [2024-04-26 12:22:24.794817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.768 qpair failed and we were unable to recover it. 
00:26:23.768 [2024-04-26 12:22:24.804804] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.768 [2024-04-26 12:22:24.804890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.768 [2024-04-26 12:22:24.804904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.768 [2024-04-26 12:22:24.804911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.768 [2024-04-26 12:22:24.804917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.768 [2024-04-26 12:22:24.804930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.768 qpair failed and we were unable to recover it. 00:26:23.768 [2024-04-26 12:22:24.814792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.768 [2024-04-26 12:22:24.814880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.768 [2024-04-26 12:22:24.814895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.768 [2024-04-26 12:22:24.814902] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.768 [2024-04-26 12:22:24.814908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.768 [2024-04-26 12:22:24.814921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.768 qpair failed and we were unable to recover it. 00:26:23.768 [2024-04-26 12:22:24.824841] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.768 [2024-04-26 12:22:24.824933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.768 [2024-04-26 12:22:24.824948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.768 [2024-04-26 12:22:24.824955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.768 [2024-04-26 12:22:24.824961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.768 [2024-04-26 12:22:24.824974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.768 qpair failed and we were unable to recover it. 
00:26:23.768 [2024-04-26 12:22:24.834708] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.768 [2024-04-26 12:22:24.834758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.768 [2024-04-26 12:22:24.834772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.768 [2024-04-26 12:22:24.834778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.768 [2024-04-26 12:22:24.834785] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.768 [2024-04-26 12:22:24.834798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.768 qpair failed and we were unable to recover it. 00:26:23.768 [2024-04-26 12:22:24.844851] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.768 [2024-04-26 12:22:24.844899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.768 [2024-04-26 12:22:24.844914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.769 [2024-04-26 12:22:24.844921] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.769 [2024-04-26 12:22:24.844927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.769 [2024-04-26 12:22:24.844940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.769 qpair failed and we were unable to recover it. 00:26:23.769 [2024-04-26 12:22:24.854766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.769 [2024-04-26 12:22:24.854826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.769 [2024-04-26 12:22:24.854846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.769 [2024-04-26 12:22:24.854853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.769 [2024-04-26 12:22:24.854859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.769 [2024-04-26 12:22:24.854872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.769 qpair failed and we were unable to recover it. 
00:26:23.769 [2024-04-26 12:22:24.864908] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.769 [2024-04-26 12:22:24.865005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.769 [2024-04-26 12:22:24.865019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.769 [2024-04-26 12:22:24.865029] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.769 [2024-04-26 12:22:24.865035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.769 [2024-04-26 12:22:24.865049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.769 qpair failed and we were unable to recover it. 00:26:23.769 [2024-04-26 12:22:24.874942] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.769 [2024-04-26 12:22:24.874991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.769 [2024-04-26 12:22:24.875004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.769 [2024-04-26 12:22:24.875011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.769 [2024-04-26 12:22:24.875017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.769 [2024-04-26 12:22:24.875030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.769 qpair failed and we were unable to recover it. 00:26:23.769 [2024-04-26 12:22:24.884984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.769 [2024-04-26 12:22:24.885074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.769 [2024-04-26 12:22:24.885089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.769 [2024-04-26 12:22:24.885095] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.769 [2024-04-26 12:22:24.885101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.769 [2024-04-26 12:22:24.885115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.769 qpair failed and we were unable to recover it. 
00:26:23.769 [2024-04-26 12:22:24.894867] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.769 [2024-04-26 12:22:24.894922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.769 [2024-04-26 12:22:24.894937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.769 [2024-04-26 12:22:24.894944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.769 [2024-04-26 12:22:24.894950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.769 [2024-04-26 12:22:24.894964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.769 qpair failed and we were unable to recover it. 00:26:23.769 [2024-04-26 12:22:24.905018] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.769 [2024-04-26 12:22:24.905080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.769 [2024-04-26 12:22:24.905094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.769 [2024-04-26 12:22:24.905101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.769 [2024-04-26 12:22:24.905107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.769 [2024-04-26 12:22:24.905119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.769 qpair failed and we were unable to recover it. 00:26:23.769 [2024-04-26 12:22:24.915072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.769 [2024-04-26 12:22:24.915119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.769 [2024-04-26 12:22:24.915134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.769 [2024-04-26 12:22:24.915141] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.769 [2024-04-26 12:22:24.915147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.769 [2024-04-26 12:22:24.915160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.769 qpair failed and we were unable to recover it. 
00:26:23.769 [2024-04-26 12:22:24.925074] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.769 [2024-04-26 12:22:24.925122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.769 [2024-04-26 12:22:24.925137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.769 [2024-04-26 12:22:24.925144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.769 [2024-04-26 12:22:24.925150] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.769 [2024-04-26 12:22:24.925163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.769 qpair failed and we were unable to recover it. 00:26:23.769 [2024-04-26 12:22:24.935114] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.769 [2024-04-26 12:22:24.935162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.769 [2024-04-26 12:22:24.935176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.769 [2024-04-26 12:22:24.935183] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.769 [2024-04-26 12:22:24.935189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.769 [2024-04-26 12:22:24.935201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.769 qpair failed and we were unable to recover it. 00:26:23.769 [2024-04-26 12:22:24.945149] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.769 [2024-04-26 12:22:24.945204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.769 [2024-04-26 12:22:24.945218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.769 [2024-04-26 12:22:24.945224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.769 [2024-04-26 12:22:24.945230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.769 [2024-04-26 12:22:24.945243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.769 qpair failed and we were unable to recover it. 
00:26:23.769 [2024-04-26 12:22:24.955176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.769 [2024-04-26 12:22:24.955256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.769 [2024-04-26 12:22:24.955276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.769 [2024-04-26 12:22:24.955283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.769 [2024-04-26 12:22:24.955290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.769 [2024-04-26 12:22:24.955303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.769 qpair failed and we were unable to recover it. 00:26:23.769 [2024-04-26 12:22:24.965230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.769 [2024-04-26 12:22:24.965280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.769 [2024-04-26 12:22:24.965294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.769 [2024-04-26 12:22:24.965301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.769 [2024-04-26 12:22:24.965306] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.769 [2024-04-26 12:22:24.965319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.769 qpair failed and we were unable to recover it. 00:26:23.769 [2024-04-26 12:22:24.975222] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.769 [2024-04-26 12:22:24.975271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.769 [2024-04-26 12:22:24.975285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.769 [2024-04-26 12:22:24.975292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.769 [2024-04-26 12:22:24.975298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:23.770 [2024-04-26 12:22:24.975311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.770 qpair failed and we were unable to recover it. 
00:26:24.032 [2024-04-26 12:22:24.985256] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.032 [2024-04-26 12:22:24.985312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.032 [2024-04-26 12:22:24.985326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.032 [2024-04-26 12:22:24.985333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.032 [2024-04-26 12:22:24.985339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.032 [2024-04-26 12:22:24.985352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-04-26 12:22:24.995170] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.032 [2024-04-26 12:22:24.995214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.032 [2024-04-26 12:22:24.995228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.032 [2024-04-26 12:22:24.995234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.032 [2024-04-26 12:22:24.995241] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.032 [2024-04-26 12:22:24.995257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-04-26 12:22:25.005194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.032 [2024-04-26 12:22:25.005260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.032 [2024-04-26 12:22:25.005274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.032 [2024-04-26 12:22:25.005281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.032 [2024-04-26 12:22:25.005287] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.032 [2024-04-26 12:22:25.005300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.032 qpair failed and we were unable to recover it. 
00:26:24.032 [2024-04-26 12:22:25.015194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.032 [2024-04-26 12:22:25.015276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.032 [2024-04-26 12:22:25.015290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.032 [2024-04-26 12:22:25.015297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.032 [2024-04-26 12:22:25.015303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.032 [2024-04-26 12:22:25.015316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-04-26 12:22:25.025344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.032 [2024-04-26 12:22:25.025395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.033 [2024-04-26 12:22:25.025409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.033 [2024-04-26 12:22:25.025416] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.033 [2024-04-26 12:22:25.025422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.033 [2024-04-26 12:22:25.025435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-04-26 12:22:25.035262] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.033 [2024-04-26 12:22:25.035309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.033 [2024-04-26 12:22:25.035323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.033 [2024-04-26 12:22:25.035330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.033 [2024-04-26 12:22:25.035336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.033 [2024-04-26 12:22:25.035348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.033 qpair failed and we were unable to recover it. 
00:26:24.033 [2024-04-26 12:22:25.045391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.033 [2024-04-26 12:22:25.045448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.033 [2024-04-26 12:22:25.045465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.033 [2024-04-26 12:22:25.045472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.033 [2024-04-26 12:22:25.045478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.033 [2024-04-26 12:22:25.045491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-04-26 12:22:25.055415] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.033 [2024-04-26 12:22:25.055463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.033 [2024-04-26 12:22:25.055477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.033 [2024-04-26 12:22:25.055484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.033 [2024-04-26 12:22:25.055490] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.033 [2024-04-26 12:22:25.055503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-04-26 12:22:25.065442] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.033 [2024-04-26 12:22:25.065501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.033 [2024-04-26 12:22:25.065515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.033 [2024-04-26 12:22:25.065522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.033 [2024-04-26 12:22:25.065528] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.033 [2024-04-26 12:22:25.065541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.033 qpair failed and we were unable to recover it. 
00:26:24.033 [2024-04-26 12:22:25.075463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.033 [2024-04-26 12:22:25.075525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.033 [2024-04-26 12:22:25.075539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.033 [2024-04-26 12:22:25.075546] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.033 [2024-04-26 12:22:25.075552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.033 [2024-04-26 12:22:25.075565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-04-26 12:22:25.085368] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.033 [2024-04-26 12:22:25.085415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.033 [2024-04-26 12:22:25.085430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.033 [2024-04-26 12:22:25.085436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.033 [2024-04-26 12:22:25.085443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.033 [2024-04-26 12:22:25.085459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-04-26 12:22:25.095394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.033 [2024-04-26 12:22:25.095440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.033 [2024-04-26 12:22:25.095454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.033 [2024-04-26 12:22:25.095461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.033 [2024-04-26 12:22:25.095467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.033 [2024-04-26 12:22:25.095480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.033 qpair failed and we were unable to recover it. 
00:26:24.033 [2024-04-26 12:22:25.105584] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.033 [2024-04-26 12:22:25.105666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.033 [2024-04-26 12:22:25.105681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.033 [2024-04-26 12:22:25.105688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.033 [2024-04-26 12:22:25.105694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.033 [2024-04-26 12:22:25.105707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-04-26 12:22:25.115578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.033 [2024-04-26 12:22:25.115619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.033 [2024-04-26 12:22:25.115633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.033 [2024-04-26 12:22:25.115640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.033 [2024-04-26 12:22:25.115647] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.033 [2024-04-26 12:22:25.115659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-04-26 12:22:25.125636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.033 [2024-04-26 12:22:25.125679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.033 [2024-04-26 12:22:25.125693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.033 [2024-04-26 12:22:25.125700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.033 [2024-04-26 12:22:25.125706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.033 [2024-04-26 12:22:25.125718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.033 qpair failed and we were unable to recover it. 
00:26:24.033 [2024-04-26 12:22:25.135660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.033 [2024-04-26 12:22:25.135725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.033 [2024-04-26 12:22:25.135742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.033 [2024-04-26 12:22:25.135749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.033 [2024-04-26 12:22:25.135755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.033 [2024-04-26 12:22:25.135768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-04-26 12:22:25.145689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.033 [2024-04-26 12:22:25.145742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.033 [2024-04-26 12:22:25.145756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.033 [2024-04-26 12:22:25.145763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.033 [2024-04-26 12:22:25.145769] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.033 [2024-04-26 12:22:25.145781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-04-26 12:22:25.155713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.033 [2024-04-26 12:22:25.155761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.033 [2024-04-26 12:22:25.155775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.033 [2024-04-26 12:22:25.155781] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.033 [2024-04-26 12:22:25.155788] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.034 [2024-04-26 12:22:25.155800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.034 qpair failed and we were unable to recover it. 
00:26:24.034 [2024-04-26 12:22:25.165689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.034 [2024-04-26 12:22:25.165735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.034 [2024-04-26 12:22:25.165749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.034 [2024-04-26 12:22:25.165755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.034 [2024-04-26 12:22:25.165761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.034 [2024-04-26 12:22:25.165774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.034 qpair failed and we were unable to recover it. 00:26:24.034 [2024-04-26 12:22:25.175753] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.034 [2024-04-26 12:22:25.175803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.034 [2024-04-26 12:22:25.175817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.034 [2024-04-26 12:22:25.175824] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.034 [2024-04-26 12:22:25.175830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.034 [2024-04-26 12:22:25.175850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.034 qpair failed and we were unable to recover it. 00:26:24.034 [2024-04-26 12:22:25.185779] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.034 [2024-04-26 12:22:25.185831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.034 [2024-04-26 12:22:25.185852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.034 [2024-04-26 12:22:25.185859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.034 [2024-04-26 12:22:25.185865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.034 [2024-04-26 12:22:25.185878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.034 qpair failed and we were unable to recover it. 
00:26:24.034 [2024-04-26 12:22:25.195758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.034 [2024-04-26 12:22:25.195829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.034 [2024-04-26 12:22:25.195847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.034 [2024-04-26 12:22:25.195853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.034 [2024-04-26 12:22:25.195859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.034 [2024-04-26 12:22:25.195872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.034 qpair failed and we were unable to recover it. 00:26:24.034 [2024-04-26 12:22:25.205815] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.034 [2024-04-26 12:22:25.205916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.034 [2024-04-26 12:22:25.205931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.034 [2024-04-26 12:22:25.205938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.034 [2024-04-26 12:22:25.205944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.034 [2024-04-26 12:22:25.205957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.034 qpair failed and we were unable to recover it. 00:26:24.034 [2024-04-26 12:22:25.215847] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.034 [2024-04-26 12:22:25.215895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.034 [2024-04-26 12:22:25.215908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.034 [2024-04-26 12:22:25.215915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.034 [2024-04-26 12:22:25.215921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.034 [2024-04-26 12:22:25.215934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.034 qpair failed and we were unable to recover it. 
00:26:24.034 [2024-04-26 12:22:25.225895] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.034 [2024-04-26 12:22:25.225951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.034 [2024-04-26 12:22:25.225968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.034 [2024-04-26 12:22:25.225975] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.034 [2024-04-26 12:22:25.225981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.034 [2024-04-26 12:22:25.225994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.034 qpair failed and we were unable to recover it. 00:26:24.034 [2024-04-26 12:22:25.235908] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.034 [2024-04-26 12:22:25.235956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.034 [2024-04-26 12:22:25.235970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.034 [2024-04-26 12:22:25.235976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.034 [2024-04-26 12:22:25.235982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.034 [2024-04-26 12:22:25.235995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.034 qpair failed and we were unable to recover it. 00:26:24.034 [2024-04-26 12:22:25.245929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.034 [2024-04-26 12:22:25.245986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.034 [2024-04-26 12:22:25.246000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.034 [2024-04-26 12:22:25.246006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.034 [2024-04-26 12:22:25.246012] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.034 [2024-04-26 12:22:25.246025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.034 qpair failed and we were unable to recover it. 
00:26:24.296 [2024-04-26 12:22:25.255955] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.296 [2024-04-26 12:22:25.256004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.296 [2024-04-26 12:22:25.256018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.296 [2024-04-26 12:22:25.256025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.296 [2024-04-26 12:22:25.256031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.296 [2024-04-26 12:22:25.256044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.296 qpair failed and we were unable to recover it. 00:26:24.296 [2024-04-26 12:22:25.265998] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.296 [2024-04-26 12:22:25.266051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.296 [2024-04-26 12:22:25.266065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.296 [2024-04-26 12:22:25.266072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.296 [2024-04-26 12:22:25.266081] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.296 [2024-04-26 12:22:25.266094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.296 qpair failed and we were unable to recover it. 00:26:24.296 [2024-04-26 12:22:25.276067] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.296 [2024-04-26 12:22:25.276113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.296 [2024-04-26 12:22:25.276130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.296 [2024-04-26 12:22:25.276137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.296 [2024-04-26 12:22:25.276143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.296 [2024-04-26 12:22:25.276157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.296 qpair failed and we were unable to recover it. 
00:26:24.296 [2024-04-26 12:22:25.286071] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.296 [2024-04-26 12:22:25.286123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.296 [2024-04-26 12:22:25.286138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.296 [2024-04-26 12:22:25.286145] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.296 [2024-04-26 12:22:25.286151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.296 [2024-04-26 12:22:25.286164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.296 qpair failed and we were unable to recover it. 00:26:24.296 [2024-04-26 12:22:25.296090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.296 [2024-04-26 12:22:25.296139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.296 [2024-04-26 12:22:25.296153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.296 [2024-04-26 12:22:25.296160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.296 [2024-04-26 12:22:25.296166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.296 [2024-04-26 12:22:25.296179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.296 qpair failed and we were unable to recover it. 00:26:24.296 [2024-04-26 12:22:25.306127] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.296 [2024-04-26 12:22:25.306173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.296 [2024-04-26 12:22:25.306187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.296 [2024-04-26 12:22:25.306194] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.296 [2024-04-26 12:22:25.306200] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.296 [2024-04-26 12:22:25.306212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.296 qpair failed and we were unable to recover it. 
00:26:24.296 [2024-04-26 12:22:25.316131] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.296 [2024-04-26 12:22:25.316231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.296 [2024-04-26 12:22:25.316246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.296 [2024-04-26 12:22:25.316253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.296 [2024-04-26 12:22:25.316259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.296 [2024-04-26 12:22:25.316272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.296 qpair failed and we were unable to recover it. 00:26:24.296 [2024-04-26 12:22:25.326178] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.296 [2024-04-26 12:22:25.326225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.296 [2024-04-26 12:22:25.326239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.296 [2024-04-26 12:22:25.326246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.296 [2024-04-26 12:22:25.326251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.296 [2024-04-26 12:22:25.326264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.296 qpair failed and we were unable to recover it. 00:26:24.296 [2024-04-26 12:22:25.336184] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.296 [2024-04-26 12:22:25.336233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.296 [2024-04-26 12:22:25.336247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.296 [2024-04-26 12:22:25.336254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.296 [2024-04-26 12:22:25.336260] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.296 [2024-04-26 12:22:25.336273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.296 qpair failed and we were unable to recover it. 
00:26:24.296 [2024-04-26 12:22:25.346219] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.296 [2024-04-26 12:22:25.346268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.296 [2024-04-26 12:22:25.346283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.296 [2024-04-26 12:22:25.346289] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.296 [2024-04-26 12:22:25.346295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.296 [2024-04-26 12:22:25.346308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.296 qpair failed and we were unable to recover it. 00:26:24.296 [2024-04-26 12:22:25.356304] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.296 [2024-04-26 12:22:25.356352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.296 [2024-04-26 12:22:25.356366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.296 [2024-04-26 12:22:25.356372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.296 [2024-04-26 12:22:25.356382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.296 [2024-04-26 12:22:25.356395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.296 qpair failed and we were unable to recover it. 00:26:24.296 [2024-04-26 12:22:25.366241] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.296 [2024-04-26 12:22:25.366284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.296 [2024-04-26 12:22:25.366298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.296 [2024-04-26 12:22:25.366304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.296 [2024-04-26 12:22:25.366310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.296 [2024-04-26 12:22:25.366323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.296 qpair failed and we were unable to recover it. 
00:26:24.296 [2024-04-26 12:22:25.376268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.296 [2024-04-26 12:22:25.376351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.296 [2024-04-26 12:22:25.376365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.296 [2024-04-26 12:22:25.376372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.296 [2024-04-26 12:22:25.376378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.296 [2024-04-26 12:22:25.376390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.296 qpair failed and we were unable to recover it. 00:26:24.297 [2024-04-26 12:22:25.386338] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.297 [2024-04-26 12:22:25.386429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.297 [2024-04-26 12:22:25.386443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.297 [2024-04-26 12:22:25.386450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.297 [2024-04-26 12:22:25.386456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.297 [2024-04-26 12:22:25.386468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.297 qpair failed and we were unable to recover it. 00:26:24.297 [2024-04-26 12:22:25.396365] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.297 [2024-04-26 12:22:25.396410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.297 [2024-04-26 12:22:25.396423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.297 [2024-04-26 12:22:25.396430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.297 [2024-04-26 12:22:25.396436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.297 [2024-04-26 12:22:25.396449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.297 qpair failed and we were unable to recover it. 
00:26:24.297 [2024-04-26 12:22:25.406253] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.297 [2024-04-26 12:22:25.406303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.297 [2024-04-26 12:22:25.406317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.297 [2024-04-26 12:22:25.406324] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.297 [2024-04-26 12:22:25.406330] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.297 [2024-04-26 12:22:25.406342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.297 qpair failed and we were unable to recover it. 00:26:24.297 [2024-04-26 12:22:25.416424] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.297 [2024-04-26 12:22:25.416470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.297 [2024-04-26 12:22:25.416483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.297 [2024-04-26 12:22:25.416490] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.297 [2024-04-26 12:22:25.416496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.297 [2024-04-26 12:22:25.416509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.297 qpair failed and we were unable to recover it. 00:26:24.297 [2024-04-26 12:22:25.426453] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.297 [2024-04-26 12:22:25.426499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.297 [2024-04-26 12:22:25.426513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.297 [2024-04-26 12:22:25.426520] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.297 [2024-04-26 12:22:25.426526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.297 [2024-04-26 12:22:25.426538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.297 qpair failed and we were unable to recover it. 
00:26:24.297 [2024-04-26 12:22:25.436476] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.297 [2024-04-26 12:22:25.436528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.297 [2024-04-26 12:22:25.436553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.297 [2024-04-26 12:22:25.436561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.297 [2024-04-26 12:22:25.436568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.297 [2024-04-26 12:22:25.436585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.297 qpair failed and we were unable to recover it. 00:26:24.297 [2024-04-26 12:22:25.446494] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.297 [2024-04-26 12:22:25.446549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.297 [2024-04-26 12:22:25.446573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.297 [2024-04-26 12:22:25.446586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.297 [2024-04-26 12:22:25.446593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.297 [2024-04-26 12:22:25.446610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.297 qpair failed and we were unable to recover it. 00:26:24.297 [2024-04-26 12:22:25.456520] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.297 [2024-04-26 12:22:25.456578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.297 [2024-04-26 12:22:25.456593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.297 [2024-04-26 12:22:25.456600] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.297 [2024-04-26 12:22:25.456607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.297 [2024-04-26 12:22:25.456621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.297 qpair failed and we were unable to recover it. 
00:26:24.297 [2024-04-26 12:22:25.466508] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.297 [2024-04-26 12:22:25.466561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.297 [2024-04-26 12:22:25.466575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.297 [2024-04-26 12:22:25.466582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.297 [2024-04-26 12:22:25.466589] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.297 [2024-04-26 12:22:25.466601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.297 qpair failed and we were unable to recover it. 00:26:24.297 [2024-04-26 12:22:25.476430] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.297 [2024-04-26 12:22:25.476474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.297 [2024-04-26 12:22:25.476488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.297 [2024-04-26 12:22:25.476495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.297 [2024-04-26 12:22:25.476501] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.297 [2024-04-26 12:22:25.476514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.297 qpair failed and we were unable to recover it. 00:26:24.297 [2024-04-26 12:22:25.486473] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.297 [2024-04-26 12:22:25.486521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.297 [2024-04-26 12:22:25.486536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.297 [2024-04-26 12:22:25.486543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.297 [2024-04-26 12:22:25.486549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.297 [2024-04-26 12:22:25.486562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.297 qpair failed and we were unable to recover it. 
00:26:24.297 [2024-04-26 12:22:25.496595] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.297 [2024-04-26 12:22:25.496648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.297 [2024-04-26 12:22:25.496672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.297 [2024-04-26 12:22:25.496681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.297 [2024-04-26 12:22:25.496687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.297 [2024-04-26 12:22:25.496704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.297 qpair failed and we were unable to recover it. 00:26:24.297 [2024-04-26 12:22:25.506646] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.297 [2024-04-26 12:22:25.506702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.297 [2024-04-26 12:22:25.506717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.297 [2024-04-26 12:22:25.506724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.297 [2024-04-26 12:22:25.506730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.297 [2024-04-26 12:22:25.506744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.297 qpair failed and we were unable to recover it. 00:26:24.560 [2024-04-26 12:22:25.516692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.516784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.516799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.516806] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.516812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.516825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 
00:26:24.561 [2024-04-26 12:22:25.526709] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.526752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.526768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.526775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.526781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.526794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 00:26:24.561 [2024-04-26 12:22:25.536713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.536774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.536788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.536800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.536806] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.536819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 00:26:24.561 [2024-04-26 12:22:25.546730] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.546782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.546796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.546803] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.546809] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.546822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 
00:26:24.561 [2024-04-26 12:22:25.556786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.556835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.556852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.556859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.556864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.556878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 00:26:24.561 [2024-04-26 12:22:25.566799] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.566856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.566870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.566877] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.566883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.566896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 00:26:24.561 [2024-04-26 12:22:25.576827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.576877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.576891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.576898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.576904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.576917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 
00:26:24.561 [2024-04-26 12:22:25.586864] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.586915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.586929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.586936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.586942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.586955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 00:26:24.561 [2024-04-26 12:22:25.596883] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.596929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.596943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.596949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.596955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.596969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 00:26:24.561 [2024-04-26 12:22:25.606936] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.606981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.606994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.607001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.607007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.607020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 
00:26:24.561 [2024-04-26 12:22:25.616954] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.617005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.617019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.617025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.617031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.617044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 00:26:24.561 [2024-04-26 12:22:25.626963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.627047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.627061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.627072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.627078] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.627091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 00:26:24.561 [2024-04-26 12:22:25.636990] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.637039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.637053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.637060] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.637066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.637079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 
00:26:24.561 [2024-04-26 12:22:25.647033] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.647078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.647091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.647098] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.647104] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.647117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 00:26:24.561 [2024-04-26 12:22:25.657045] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.657093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.657106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.657113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.657119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.657132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 00:26:24.561 [2024-04-26 12:22:25.667082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.667137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.667151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.667157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.667163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.667176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 
00:26:24.561 [2024-04-26 12:22:25.677102] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.561 [2024-04-26 12:22:25.677150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.561 [2024-04-26 12:22:25.677163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.561 [2024-04-26 12:22:25.677170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.561 [2024-04-26 12:22:25.677176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.561 [2024-04-26 12:22:25.677188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.561 qpair failed and we were unable to recover it. 00:26:24.561 [2024-04-26 12:22:25.687150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.562 [2024-04-26 12:22:25.687194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.562 [2024-04-26 12:22:25.687208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.562 [2024-04-26 12:22:25.687215] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.562 [2024-04-26 12:22:25.687221] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.562 [2024-04-26 12:22:25.687234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.562 qpair failed and we were unable to recover it. 00:26:24.562 [2024-04-26 12:22:25.697034] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.562 [2024-04-26 12:22:25.697084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.562 [2024-04-26 12:22:25.697097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.562 [2024-04-26 12:22:25.697104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.562 [2024-04-26 12:22:25.697110] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.562 [2024-04-26 12:22:25.697123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.562 qpair failed and we were unable to recover it. 
00:26:24.562 [2024-04-26 12:22:25.707093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.562 [2024-04-26 12:22:25.707155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.562 [2024-04-26 12:22:25.707169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.562 [2024-04-26 12:22:25.707176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.562 [2024-04-26 12:22:25.707182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.562 [2024-04-26 12:22:25.707194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.562 qpair failed and we were unable to recover it. 00:26:24.562 [2024-04-26 12:22:25.717106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.562 [2024-04-26 12:22:25.717150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.562 [2024-04-26 12:22:25.717170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.562 [2024-04-26 12:22:25.717177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.562 [2024-04-26 12:22:25.717183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.562 [2024-04-26 12:22:25.717195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.562 qpair failed and we were unable to recover it. 00:26:24.562 [2024-04-26 12:22:25.727251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.562 [2024-04-26 12:22:25.727331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.562 [2024-04-26 12:22:25.727345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.562 [2024-04-26 12:22:25.727352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.562 [2024-04-26 12:22:25.727359] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.562 [2024-04-26 12:22:25.727371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.562 qpair failed and we were unable to recover it. 
00:26:24.562 [2024-04-26 12:22:25.737283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.562 [2024-04-26 12:22:25.737331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.562 [2024-04-26 12:22:25.737345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.562 [2024-04-26 12:22:25.737352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.562 [2024-04-26 12:22:25.737358] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.562 [2024-04-26 12:22:25.737371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.562 qpair failed and we were unable to recover it. 00:26:24.562 [2024-04-26 12:22:25.747344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.562 [2024-04-26 12:22:25.747428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.562 [2024-04-26 12:22:25.747442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.562 [2024-04-26 12:22:25.747449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.562 [2024-04-26 12:22:25.747455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.562 [2024-04-26 12:22:25.747467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.562 qpair failed and we were unable to recover it. 00:26:24.562 [2024-04-26 12:22:25.757282] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.562 [2024-04-26 12:22:25.757360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.562 [2024-04-26 12:22:25.757374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.562 [2024-04-26 12:22:25.757381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.562 [2024-04-26 12:22:25.757387] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.562 [2024-04-26 12:22:25.757403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.562 qpair failed and we were unable to recover it. 
00:26:24.562 [2024-04-26 12:22:25.767321] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.562 [2024-04-26 12:22:25.767368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.562 [2024-04-26 12:22:25.767382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.562 [2024-04-26 12:22:25.767389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.562 [2024-04-26 12:22:25.767395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.562 [2024-04-26 12:22:25.767407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.562 qpair failed and we were unable to recover it. 00:26:24.562 [2024-04-26 12:22:25.777364] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.562 [2024-04-26 12:22:25.777413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.562 [2024-04-26 12:22:25.777427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.562 [2024-04-26 12:22:25.777434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.562 [2024-04-26 12:22:25.777440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.562 [2024-04-26 12:22:25.777453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.562 qpair failed and we were unable to recover it. 00:26:24.825 [2024-04-26 12:22:25.787415] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.825 [2024-04-26 12:22:25.787468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.825 [2024-04-26 12:22:25.787482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.825 [2024-04-26 12:22:25.787489] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.825 [2024-04-26 12:22:25.787495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.825 [2024-04-26 12:22:25.787508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.825 qpair failed and we were unable to recover it. 
00:26:24.825 [2024-04-26 12:22:25.797463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.825 [2024-04-26 12:22:25.797507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.825 [2024-04-26 12:22:25.797521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.825 [2024-04-26 12:22:25.797528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.825 [2024-04-26 12:22:25.797534] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.825 [2024-04-26 12:22:25.797547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.825 qpair failed and we were unable to recover it. 00:26:24.825 [2024-04-26 12:22:25.807479] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.825 [2024-04-26 12:22:25.807526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.825 [2024-04-26 12:22:25.807543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.825 [2024-04-26 12:22:25.807550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.825 [2024-04-26 12:22:25.807556] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.825 [2024-04-26 12:22:25.807569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.825 qpair failed and we were unable to recover it. 00:26:24.825 [2024-04-26 12:22:25.817504] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.825 [2024-04-26 12:22:25.817585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.825 [2024-04-26 12:22:25.817609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.825 [2024-04-26 12:22:25.817618] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.825 [2024-04-26 12:22:25.817624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.825 [2024-04-26 12:22:25.817642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.825 qpair failed and we were unable to recover it. 
00:26:24.825 [2024-04-26 12:22:25.827522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.825 [2024-04-26 12:22:25.827616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.825 [2024-04-26 12:22:25.827635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.825 [2024-04-26 12:22:25.827642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.825 [2024-04-26 12:22:25.827648] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.825 [2024-04-26 12:22:25.827663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.825 qpair failed and we were unable to recover it. 00:26:24.825 [2024-04-26 12:22:25.837534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.825 [2024-04-26 12:22:25.837582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.825 [2024-04-26 12:22:25.837597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.825 [2024-04-26 12:22:25.837604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.825 [2024-04-26 12:22:25.837610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.825 [2024-04-26 12:22:25.837624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.825 qpair failed and we were unable to recover it. 00:26:24.825 [2024-04-26 12:22:25.847548] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.825 [2024-04-26 12:22:25.847601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.825 [2024-04-26 12:22:25.847615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.825 [2024-04-26 12:22:25.847622] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.825 [2024-04-26 12:22:25.847628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.825 [2024-04-26 12:22:25.847645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.825 qpair failed and we were unable to recover it. 
00:26:24.825 [2024-04-26 12:22:25.857610] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.825 [2024-04-26 12:22:25.857656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.825 [2024-04-26 12:22:25.857670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.825 [2024-04-26 12:22:25.857677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.825 [2024-04-26 12:22:25.857683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.826 [2024-04-26 12:22:25.857696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.826 qpair failed and we were unable to recover it. 00:26:24.826 [2024-04-26 12:22:25.867638] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.826 [2024-04-26 12:22:25.867695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.826 [2024-04-26 12:22:25.867710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.826 [2024-04-26 12:22:25.867717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.826 [2024-04-26 12:22:25.867723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.826 [2024-04-26 12:22:25.867736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.826 qpair failed and we were unable to recover it. 00:26:24.826 [2024-04-26 12:22:25.877548] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.826 [2024-04-26 12:22:25.877606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.826 [2024-04-26 12:22:25.877620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.826 [2024-04-26 12:22:25.877627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.826 [2024-04-26 12:22:25.877633] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.826 [2024-04-26 12:22:25.877646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.826 qpair failed and we were unable to recover it. 
00:26:24.826 [2024-04-26 12:22:25.887553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.826 [2024-04-26 12:22:25.887608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.826 [2024-04-26 12:22:25.887623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.826 [2024-04-26 12:22:25.887630] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.826 [2024-04-26 12:22:25.887635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.826 [2024-04-26 12:22:25.887648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.826 qpair failed and we were unable to recover it. 00:26:24.826 [2024-04-26 12:22:25.897758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.826 [2024-04-26 12:22:25.897804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.826 [2024-04-26 12:22:25.897822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.826 [2024-04-26 12:22:25.897828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.826 [2024-04-26 12:22:25.897834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.826 [2024-04-26 12:22:25.897854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.826 qpair failed and we were unable to recover it. 00:26:24.826 [2024-04-26 12:22:25.907739] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.826 [2024-04-26 12:22:25.907795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.826 [2024-04-26 12:22:25.907811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.826 [2024-04-26 12:22:25.907818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.826 [2024-04-26 12:22:25.907824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.826 [2024-04-26 12:22:25.907843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.826 qpair failed and we were unable to recover it. 
00:26:24.826 [2024-04-26 12:22:25.917769] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.826 [2024-04-26 12:22:25.917833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.826 [2024-04-26 12:22:25.917854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.826 [2024-04-26 12:22:25.917861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.826 [2024-04-26 12:22:25.917867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.826 [2024-04-26 12:22:25.917880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.826 qpair failed and we were unable to recover it. 00:26:24.826 [2024-04-26 12:22:25.927787] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.826 [2024-04-26 12:22:25.927833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.826 [2024-04-26 12:22:25.927853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.826 [2024-04-26 12:22:25.927860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.826 [2024-04-26 12:22:25.927866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.826 [2024-04-26 12:22:25.927879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.826 qpair failed and we were unable to recover it. 00:26:24.826 [2024-04-26 12:22:25.937827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.826 [2024-04-26 12:22:25.937878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.826 [2024-04-26 12:22:25.937893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.826 [2024-04-26 12:22:25.937900] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.826 [2024-04-26 12:22:25.937906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.826 [2024-04-26 12:22:25.937923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.826 qpair failed and we were unable to recover it. 
00:26:24.826 [2024-04-26 12:22:25.947844] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.826 [2024-04-26 12:22:25.947900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.826 [2024-04-26 12:22:25.947914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.826 [2024-04-26 12:22:25.947921] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.826 [2024-04-26 12:22:25.947927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.826 [2024-04-26 12:22:25.947939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.826 qpair failed and we were unable to recover it. 00:26:24.826 [2024-04-26 12:22:25.957873] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.826 [2024-04-26 12:22:25.957927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.826 [2024-04-26 12:22:25.957940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.826 [2024-04-26 12:22:25.957947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.826 [2024-04-26 12:22:25.957953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.826 [2024-04-26 12:22:25.957966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.826 qpair failed and we were unable to recover it. 00:26:24.826 [2024-04-26 12:22:25.967903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.826 [2024-04-26 12:22:25.967991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.826 [2024-04-26 12:22:25.968005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.826 [2024-04-26 12:22:25.968012] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.826 [2024-04-26 12:22:25.968018] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.826 [2024-04-26 12:22:25.968031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.826 qpair failed and we were unable to recover it. 
00:26:24.826 [2024-04-26 12:22:25.977934] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.826 [2024-04-26 12:22:25.977984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.826 [2024-04-26 12:22:25.977998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.826 [2024-04-26 12:22:25.978005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.826 [2024-04-26 12:22:25.978011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.826 [2024-04-26 12:22:25.978024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.826 qpair failed and we were unable to recover it. 00:26:24.826 [2024-04-26 12:22:25.987979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.826 [2024-04-26 12:22:25.988029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.826 [2024-04-26 12:22:25.988047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.826 [2024-04-26 12:22:25.988054] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.826 [2024-04-26 12:22:25.988060] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.826 [2024-04-26 12:22:25.988073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.826 qpair failed and we were unable to recover it. 00:26:24.826 [2024-04-26 12:22:25.997939] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.827 [2024-04-26 12:22:25.998024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.827 [2024-04-26 12:22:25.998038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.827 [2024-04-26 12:22:25.998044] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.827 [2024-04-26 12:22:25.998050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.827 [2024-04-26 12:22:25.998063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.827 qpair failed and we were unable to recover it. 
00:26:24.827 [2024-04-26 12:22:26.008006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.827 [2024-04-26 12:22:26.008062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.827 [2024-04-26 12:22:26.008078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.827 [2024-04-26 12:22:26.008085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.827 [2024-04-26 12:22:26.008093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.827 [2024-04-26 12:22:26.008108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.827 qpair failed and we were unable to recover it. 00:26:24.827 [2024-04-26 12:22:26.017907] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.827 [2024-04-26 12:22:26.017955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.827 [2024-04-26 12:22:26.017969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.827 [2024-04-26 12:22:26.017976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.827 [2024-04-26 12:22:26.017982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.827 [2024-04-26 12:22:26.017995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.827 qpair failed and we were unable to recover it. 00:26:24.827 [2024-04-26 12:22:26.028069] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.827 [2024-04-26 12:22:26.028161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.827 [2024-04-26 12:22:26.028176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.827 [2024-04-26 12:22:26.028183] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.827 [2024-04-26 12:22:26.028192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.827 [2024-04-26 12:22:26.028205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.827 qpair failed and we were unable to recover it. 
00:26:24.827 [2024-04-26 12:22:26.038103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.827 [2024-04-26 12:22:26.038174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.827 [2024-04-26 12:22:26.038188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.827 [2024-04-26 12:22:26.038196] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.827 [2024-04-26 12:22:26.038202] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:24.827 [2024-04-26 12:22:26.038215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.827 qpair failed and we were unable to recover it. 00:26:25.091 [2024-04-26 12:22:26.048168] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.091 [2024-04-26 12:22:26.048216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.091 [2024-04-26 12:22:26.048231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.091 [2024-04-26 12:22:26.048238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.091 [2024-04-26 12:22:26.048244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.091 [2024-04-26 12:22:26.048257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-04-26 12:22:26.058127] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.091 [2024-04-26 12:22:26.058173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.091 [2024-04-26 12:22:26.058186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.091 [2024-04-26 12:22:26.058193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.091 [2024-04-26 12:22:26.058199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.091 [2024-04-26 12:22:26.058212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.091 qpair failed and we were unable to recover it. 
00:26:25.091 [2024-04-26 12:22:26.068054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.091 [2024-04-26 12:22:26.068107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.091 [2024-04-26 12:22:26.068122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.091 [2024-04-26 12:22:26.068129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.091 [2024-04-26 12:22:26.068136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.091 [2024-04-26 12:22:26.068149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-04-26 12:22:26.078186] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.091 [2024-04-26 12:22:26.078238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.091 [2024-04-26 12:22:26.078252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.091 [2024-04-26 12:22:26.078259] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.091 [2024-04-26 12:22:26.078265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.091 [2024-04-26 12:22:26.078278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-04-26 12:22:26.088232] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.091 [2024-04-26 12:22:26.088282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.091 [2024-04-26 12:22:26.088297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.091 [2024-04-26 12:22:26.088304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.091 [2024-04-26 12:22:26.088310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.091 [2024-04-26 12:22:26.088322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.091 qpair failed and we were unable to recover it. 
00:26:25.091 [2024-04-26 12:22:26.098212] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.091 [2024-04-26 12:22:26.098257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.091 [2024-04-26 12:22:26.098271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.091 [2024-04-26 12:22:26.098278] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.091 [2024-04-26 12:22:26.098284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.091 [2024-04-26 12:22:26.098297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-04-26 12:22:26.108271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.091 [2024-04-26 12:22:26.108320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.091 [2024-04-26 12:22:26.108334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.091 [2024-04-26 12:22:26.108341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.091 [2024-04-26 12:22:26.108347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.092 [2024-04-26 12:22:26.108360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-04-26 12:22:26.118282] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.092 [2024-04-26 12:22:26.118327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.092 [2024-04-26 12:22:26.118341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.092 [2024-04-26 12:22:26.118347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.092 [2024-04-26 12:22:26.118357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.092 [2024-04-26 12:22:26.118370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.092 qpair failed and we were unable to recover it. 
00:26:25.092 [2024-04-26 12:22:26.128343] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.092 [2024-04-26 12:22:26.128419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.092 [2024-04-26 12:22:26.128434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.092 [2024-04-26 12:22:26.128441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.092 [2024-04-26 12:22:26.128448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.092 [2024-04-26 12:22:26.128460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-04-26 12:22:26.138368] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.092 [2024-04-26 12:22:26.138411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.092 [2024-04-26 12:22:26.138426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.092 [2024-04-26 12:22:26.138432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.092 [2024-04-26 12:22:26.138438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.092 [2024-04-26 12:22:26.138451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-04-26 12:22:26.148407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.092 [2024-04-26 12:22:26.148460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.092 [2024-04-26 12:22:26.148475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.092 [2024-04-26 12:22:26.148481] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.092 [2024-04-26 12:22:26.148487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.092 [2024-04-26 12:22:26.148500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.092 qpair failed and we were unable to recover it. 
00:26:25.092 [2024-04-26 12:22:26.158420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.092 [2024-04-26 12:22:26.158464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.092 [2024-04-26 12:22:26.158478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.092 [2024-04-26 12:22:26.158484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.092 [2024-04-26 12:22:26.158490] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.092 [2024-04-26 12:22:26.158503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-04-26 12:22:26.168455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.092 [2024-04-26 12:22:26.168504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.092 [2024-04-26 12:22:26.168517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.092 [2024-04-26 12:22:26.168524] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.092 [2024-04-26 12:22:26.168530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.092 [2024-04-26 12:22:26.168542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-04-26 12:22:26.178486] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.092 [2024-04-26 12:22:26.178534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.092 [2024-04-26 12:22:26.178547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.092 [2024-04-26 12:22:26.178554] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.092 [2024-04-26 12:22:26.178560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.092 [2024-04-26 12:22:26.178572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.092 qpair failed and we were unable to recover it. 
00:26:25.092 [2024-04-26 12:22:26.188516] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.092 [2024-04-26 12:22:26.188570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.092 [2024-04-26 12:22:26.188585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.092 [2024-04-26 12:22:26.188592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.092 [2024-04-26 12:22:26.188598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.092 [2024-04-26 12:22:26.188611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-04-26 12:22:26.198513] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.092 [2024-04-26 12:22:26.198566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.092 [2024-04-26 12:22:26.198591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.092 [2024-04-26 12:22:26.198599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.092 [2024-04-26 12:22:26.198606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.092 [2024-04-26 12:22:26.198623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-04-26 12:22:26.208549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.092 [2024-04-26 12:22:26.208604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.092 [2024-04-26 12:22:26.208629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.092 [2024-04-26 12:22:26.208642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.092 [2024-04-26 12:22:26.208649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.092 [2024-04-26 12:22:26.208666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.092 qpair failed and we were unable to recover it. 
00:26:25.092 [2024-04-26 12:22:26.218596] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.092 [2024-04-26 12:22:26.218647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.092 [2024-04-26 12:22:26.218672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.092 [2024-04-26 12:22:26.218680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.092 [2024-04-26 12:22:26.218686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.092 [2024-04-26 12:22:26.218704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-04-26 12:22:26.228609] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.092 [2024-04-26 12:22:26.228708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.092 [2024-04-26 12:22:26.228724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.092 [2024-04-26 12:22:26.228731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.092 [2024-04-26 12:22:26.228737] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.092 [2024-04-26 12:22:26.228751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-04-26 12:22:26.238663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.092 [2024-04-26 12:22:26.238741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.092 [2024-04-26 12:22:26.238755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.092 [2024-04-26 12:22:26.238762] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.092 [2024-04-26 12:22:26.238768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.092 [2024-04-26 12:22:26.238781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.092 qpair failed and we were unable to recover it. 
00:26:25.092 [2024-04-26 12:22:26.248537] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.093 [2024-04-26 12:22:26.248586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.093 [2024-04-26 12:22:26.248600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.093 [2024-04-26 12:22:26.248606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.093 [2024-04-26 12:22:26.248613] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.093 [2024-04-26 12:22:26.248625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-04-26 12:22:26.258702] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.093 [2024-04-26 12:22:26.258749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.093 [2024-04-26 12:22:26.258763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.093 [2024-04-26 12:22:26.258770] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.093 [2024-04-26 12:22:26.258775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.093 [2024-04-26 12:22:26.258788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-04-26 12:22:26.268725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.093 [2024-04-26 12:22:26.268773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.093 [2024-04-26 12:22:26.268787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.093 [2024-04-26 12:22:26.268793] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.093 [2024-04-26 12:22:26.268800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.093 [2024-04-26 12:22:26.268812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.093 qpair failed and we were unable to recover it. 
00:26:25.093 [2024-04-26 12:22:26.278617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.093 [2024-04-26 12:22:26.278670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.093 [2024-04-26 12:22:26.278684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.093 [2024-04-26 12:22:26.278691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.093 [2024-04-26 12:22:26.278697] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.093 [2024-04-26 12:22:26.278710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-04-26 12:22:26.288755] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.093 [2024-04-26 12:22:26.288799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.093 [2024-04-26 12:22:26.288813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.093 [2024-04-26 12:22:26.288820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.093 [2024-04-26 12:22:26.288826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.093 [2024-04-26 12:22:26.288869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-04-26 12:22:26.298797] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.093 [2024-04-26 12:22:26.298854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.093 [2024-04-26 12:22:26.298869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.093 [2024-04-26 12:22:26.298879] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.093 [2024-04-26 12:22:26.298885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.093 [2024-04-26 12:22:26.298899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.093 qpair failed and we were unable to recover it. 
00:26:25.093 [2024-04-26 12:22:26.308831] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.093 [2024-04-26 12:22:26.308886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.093 [2024-04-26 12:22:26.308901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.093 [2024-04-26 12:22:26.308907] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.093 [2024-04-26 12:22:26.308913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.093 [2024-04-26 12:22:26.308926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.356 [2024-04-26 12:22:26.318857] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.356 [2024-04-26 12:22:26.318904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.356 [2024-04-26 12:22:26.318918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.356 [2024-04-26 12:22:26.318924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.356 [2024-04-26 12:22:26.318930] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.356 [2024-04-26 12:22:26.318944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.356 qpair failed and we were unable to recover it. 00:26:25.356 [2024-04-26 12:22:26.328882] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.356 [2024-04-26 12:22:26.328961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.356 [2024-04-26 12:22:26.328975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.356 [2024-04-26 12:22:26.328982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.356 [2024-04-26 12:22:26.328988] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.356 [2024-04-26 12:22:26.329001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.356 qpair failed and we were unable to recover it. 
00:26:25.356 [2024-04-26 12:22:26.338874] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.356 [2024-04-26 12:22:26.338922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.356 [2024-04-26 12:22:26.338936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.356 [2024-04-26 12:22:26.338943] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.356 [2024-04-26 12:22:26.338949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.356 [2024-04-26 12:22:26.338962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.356 qpair failed and we were unable to recover it. 00:26:25.356 [2024-04-26 12:22:26.348933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.356 [2024-04-26 12:22:26.348989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.356 [2024-04-26 12:22:26.349003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.356 [2024-04-26 12:22:26.349009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.356 [2024-04-26 12:22:26.349015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.356 [2024-04-26 12:22:26.349028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.356 qpair failed and we were unable to recover it. 00:26:25.356 [2024-04-26 12:22:26.358931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.356 [2024-04-26 12:22:26.358979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.356 [2024-04-26 12:22:26.358994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.356 [2024-04-26 12:22:26.359001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.356 [2024-04-26 12:22:26.359007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.356 [2024-04-26 12:22:26.359020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.356 qpair failed and we were unable to recover it. 
00:26:25.356 [2024-04-26 12:22:26.368992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.356 [2024-04-26 12:22:26.369040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.356 [2024-04-26 12:22:26.369055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.356 [2024-04-26 12:22:26.369061] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.356 [2024-04-26 12:22:26.369068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.356 [2024-04-26 12:22:26.369081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.356 qpair failed and we were unable to recover it. 00:26:25.356 [2024-04-26 12:22:26.378982] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.356 [2024-04-26 12:22:26.379029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.357 [2024-04-26 12:22:26.379043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.357 [2024-04-26 12:22:26.379050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.357 [2024-04-26 12:22:26.379056] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.357 [2024-04-26 12:22:26.379069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.357 qpair failed and we were unable to recover it. 00:26:25.357 [2024-04-26 12:22:26.389061] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.357 [2024-04-26 12:22:26.389114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.357 [2024-04-26 12:22:26.389128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.357 [2024-04-26 12:22:26.389139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.357 [2024-04-26 12:22:26.389146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.357 [2024-04-26 12:22:26.389159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.357 qpair failed and we were unable to recover it. 
00:26:25.357 [2024-04-26 12:22:26.399074] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.357 [2024-04-26 12:22:26.399118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.357 [2024-04-26 12:22:26.399132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.357 [2024-04-26 12:22:26.399139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.357 [2024-04-26 12:22:26.399145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.357 [2024-04-26 12:22:26.399158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.357 qpair failed and we were unable to recover it. 00:26:25.357 [2024-04-26 12:22:26.409088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.357 [2024-04-26 12:22:26.409131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.357 [2024-04-26 12:22:26.409145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.357 [2024-04-26 12:22:26.409151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.357 [2024-04-26 12:22:26.409157] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.357 [2024-04-26 12:22:26.409170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.357 qpair failed and we were unable to recover it. 00:26:25.357 [2024-04-26 12:22:26.419117] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.357 [2024-04-26 12:22:26.419166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.357 [2024-04-26 12:22:26.419179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.357 [2024-04-26 12:22:26.419185] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.357 [2024-04-26 12:22:26.419191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.357 [2024-04-26 12:22:26.419204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.357 qpair failed and we were unable to recover it. 
00:26:25.357 [2024-04-26 12:22:26.429120] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.357 [2024-04-26 12:22:26.429172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.357 [2024-04-26 12:22:26.429186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.357 [2024-04-26 12:22:26.429193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.357 [2024-04-26 12:22:26.429199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.357 [2024-04-26 12:22:26.429211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.357 qpair failed and we were unable to recover it. 00:26:25.357 [2024-04-26 12:22:26.439044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.357 [2024-04-26 12:22:26.439158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.357 [2024-04-26 12:22:26.439171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.357 [2024-04-26 12:22:26.439178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.357 [2024-04-26 12:22:26.439184] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.357 [2024-04-26 12:22:26.439197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.357 qpair failed and we were unable to recover it. 00:26:25.357 [2024-04-26 12:22:26.449073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.357 [2024-04-26 12:22:26.449142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.357 [2024-04-26 12:22:26.449156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.357 [2024-04-26 12:22:26.449162] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.357 [2024-04-26 12:22:26.449168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.357 [2024-04-26 12:22:26.449181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.357 qpair failed and we were unable to recover it. 
00:26:25.357 [2024-04-26 12:22:26.459231] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.357 [2024-04-26 12:22:26.459279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.357 [2024-04-26 12:22:26.459293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.357 [2024-04-26 12:22:26.459299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.357 [2024-04-26 12:22:26.459305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.357 [2024-04-26 12:22:26.459318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.357 qpair failed and we were unable to recover it. 00:26:25.357 [2024-04-26 12:22:26.469228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.357 [2024-04-26 12:22:26.469284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.357 [2024-04-26 12:22:26.469298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.357 [2024-04-26 12:22:26.469305] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.357 [2024-04-26 12:22:26.469311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.357 [2024-04-26 12:22:26.469323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.357 qpair failed and we were unable to recover it. 00:26:25.357 [2024-04-26 12:22:26.479283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.357 [2024-04-26 12:22:26.479366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.357 [2024-04-26 12:22:26.479386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.357 [2024-04-26 12:22:26.479393] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.357 [2024-04-26 12:22:26.479399] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.357 [2024-04-26 12:22:26.479412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.357 qpair failed and we were unable to recover it. 
00:26:25.357 [2024-04-26 12:22:26.489303] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.357 [2024-04-26 12:22:26.489355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.357 [2024-04-26 12:22:26.489369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.357 [2024-04-26 12:22:26.489376] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.357 [2024-04-26 12:22:26.489382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.357 [2024-04-26 12:22:26.489395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.357 qpair failed and we were unable to recover it. 00:26:25.357 [2024-04-26 12:22:26.499344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.357 [2024-04-26 12:22:26.499389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.357 [2024-04-26 12:22:26.499403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.357 [2024-04-26 12:22:26.499410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.357 [2024-04-26 12:22:26.499415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.357 [2024-04-26 12:22:26.499428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.357 qpair failed and we were unable to recover it. 00:26:25.357 [2024-04-26 12:22:26.509341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.357 [2024-04-26 12:22:26.509389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.357 [2024-04-26 12:22:26.509403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.357 [2024-04-26 12:22:26.509410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.357 [2024-04-26 12:22:26.509416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.358 [2024-04-26 12:22:26.509429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.358 qpair failed and we were unable to recover it. 
00:26:25.358 [2024-04-26 12:22:26.519385] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.358 [2024-04-26 12:22:26.519431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.358 [2024-04-26 12:22:26.519445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.358 [2024-04-26 12:22:26.519451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.358 [2024-04-26 12:22:26.519457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.358 [2024-04-26 12:22:26.519470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.358 qpair failed and we were unable to recover it. 00:26:25.358 [2024-04-26 12:22:26.529433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.358 [2024-04-26 12:22:26.529481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.358 [2024-04-26 12:22:26.529495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.358 [2024-04-26 12:22:26.529502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.358 [2024-04-26 12:22:26.529508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.358 [2024-04-26 12:22:26.529521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.358 qpair failed and we were unable to recover it. 00:26:25.358 [2024-04-26 12:22:26.539505] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.358 [2024-04-26 12:22:26.539582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.358 [2024-04-26 12:22:26.539596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.358 [2024-04-26 12:22:26.539603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.358 [2024-04-26 12:22:26.539609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.358 [2024-04-26 12:22:26.539621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.358 qpair failed and we were unable to recover it. 
00:26:25.358 [2024-04-26 12:22:26.549355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.358 [2024-04-26 12:22:26.549408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.358 [2024-04-26 12:22:26.549422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.358 [2024-04-26 12:22:26.549429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.358 [2024-04-26 12:22:26.549434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.358 [2024-04-26 12:22:26.549447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.358 qpair failed and we were unable to recover it. 00:26:25.358 [2024-04-26 12:22:26.559440] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.358 [2024-04-26 12:22:26.559543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.358 [2024-04-26 12:22:26.559561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.358 [2024-04-26 12:22:26.559571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.358 [2024-04-26 12:22:26.559577] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.358 [2024-04-26 12:22:26.559592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.358 qpair failed and we were unable to recover it. 00:26:25.358 [2024-04-26 12:22:26.569492] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.358 [2024-04-26 12:22:26.569538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.358 [2024-04-26 12:22:26.569558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.358 [2024-04-26 12:22:26.569565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.358 [2024-04-26 12:22:26.569571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.358 [2024-04-26 12:22:26.569584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.358 qpair failed and we were unable to recover it. 
00:26:25.621 [2024-04-26 12:22:26.579552] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.621 [2024-04-26 12:22:26.579600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.621 [2024-04-26 12:22:26.579614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.621 [2024-04-26 12:22:26.579621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.621 [2024-04-26 12:22:26.579627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.621 [2024-04-26 12:22:26.579640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.621 qpair failed and we were unable to recover it. 00:26:25.621 [2024-04-26 12:22:26.589455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.621 [2024-04-26 12:22:26.589506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.621 [2024-04-26 12:22:26.589520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.621 [2024-04-26 12:22:26.589527] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.621 [2024-04-26 12:22:26.589532] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.621 [2024-04-26 12:22:26.589545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.621 qpair failed and we were unable to recover it. 00:26:25.621 [2024-04-26 12:22:26.599592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.621 [2024-04-26 12:22:26.599644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.621 [2024-04-26 12:22:26.599659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.621 [2024-04-26 12:22:26.599665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.621 [2024-04-26 12:22:26.599671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.621 [2024-04-26 12:22:26.599684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.621 qpair failed and we were unable to recover it. 
00:26:25.621 [2024-04-26 12:22:26.609554] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.621 [2024-04-26 12:22:26.609612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.621 [2024-04-26 12:22:26.609626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.621 [2024-04-26 12:22:26.609632] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.621 [2024-04-26 12:22:26.609639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.621 [2024-04-26 12:22:26.609655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.621 qpair failed and we were unable to recover it. 00:26:25.621 [2024-04-26 12:22:26.619645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.621 [2024-04-26 12:22:26.619699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.621 [2024-04-26 12:22:26.619713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.621 [2024-04-26 12:22:26.619720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.621 [2024-04-26 12:22:26.619726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.621 [2024-04-26 12:22:26.619738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.621 qpair failed and we were unable to recover it. 00:26:25.621 [2024-04-26 12:22:26.629709] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.621 [2024-04-26 12:22:26.629760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.621 [2024-04-26 12:22:26.629775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.621 [2024-04-26 12:22:26.629781] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.621 [2024-04-26 12:22:26.629787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.622 [2024-04-26 12:22:26.629800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.622 qpair failed and we were unable to recover it. 
00:26:25.622 [2024-04-26 12:22:26.639587] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.622 [2024-04-26 12:22:26.639635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.622 [2024-04-26 12:22:26.639649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.622 [2024-04-26 12:22:26.639656] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.622 [2024-04-26 12:22:26.639662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.622 [2024-04-26 12:22:26.639675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.622 qpair failed and we were unable to recover it. 00:26:25.622 [2024-04-26 12:22:26.649743] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.622 [2024-04-26 12:22:26.649790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.622 [2024-04-26 12:22:26.649804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.622 [2024-04-26 12:22:26.649811] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.622 [2024-04-26 12:22:26.649817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.622 [2024-04-26 12:22:26.649829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.622 qpair failed and we were unable to recover it. 00:26:25.622 [2024-04-26 12:22:26.659776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.622 [2024-04-26 12:22:26.659823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.622 [2024-04-26 12:22:26.659845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.622 [2024-04-26 12:22:26.659852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.622 [2024-04-26 12:22:26.659858] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.622 [2024-04-26 12:22:26.659871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.622 qpair failed and we were unable to recover it. 
00:26:25.622 [2024-04-26 12:22:26.669801] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.622 [2024-04-26 12:22:26.669853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.622 [2024-04-26 12:22:26.669867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.622 [2024-04-26 12:22:26.669874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.622 [2024-04-26 12:22:26.669880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.622 [2024-04-26 12:22:26.669892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.622 qpair failed and we were unable to recover it. 00:26:25.622 [2024-04-26 12:22:26.679824] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.622 [2024-04-26 12:22:26.679919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.622 [2024-04-26 12:22:26.679933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.622 [2024-04-26 12:22:26.679940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.622 [2024-04-26 12:22:26.679946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.622 [2024-04-26 12:22:26.679959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.622 qpair failed and we were unable to recover it. 00:26:25.622 [2024-04-26 12:22:26.689719] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.622 [2024-04-26 12:22:26.689801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.622 [2024-04-26 12:22:26.689815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.622 [2024-04-26 12:22:26.689822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.622 [2024-04-26 12:22:26.689828] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.622 [2024-04-26 12:22:26.689845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.622 qpair failed and we were unable to recover it. 
00:26:25.622 [2024-04-26 12:22:26.699888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.622 [2024-04-26 12:22:26.699957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.622 [2024-04-26 12:22:26.699971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.622 [2024-04-26 12:22:26.699978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.622 [2024-04-26 12:22:26.699984] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.622 [2024-04-26 12:22:26.700001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.622 qpair failed and we were unable to recover it. 00:26:25.622 [2024-04-26 12:22:26.709891] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.622 [2024-04-26 12:22:26.709942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.622 [2024-04-26 12:22:26.709956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.622 [2024-04-26 12:22:26.709963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.622 [2024-04-26 12:22:26.709969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.622 [2024-04-26 12:22:26.709981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.622 qpair failed and we were unable to recover it. 00:26:25.622 [2024-04-26 12:22:26.719931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.622 [2024-04-26 12:22:26.720017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.622 [2024-04-26 12:22:26.720031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.622 [2024-04-26 12:22:26.720037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.622 [2024-04-26 12:22:26.720043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x178c650 00:26:25.622 [2024-04-26 12:22:26.720056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.622 qpair failed and we were unable to recover it. 
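The records above repeat the same failure path for each I/O qpair CONNECT attempt: ctrlr.c rejects the unknown controller ID, the fabrics CONNECT poll returns rc -5 with sct 1 / sc 130, the TCP qpair fails to connect, and spdk_nvme_qpair_process_completions reports CQ transport error -6 before the harness logs "qpair failed and we were unable to recover it." When triaging a console log like this one, a quick way to confirm that every failure follows that same path is to count the records; a minimal sketch, with the grep patterns taken verbatim from the messages above and the log file name only a placeholder:

#!/usr/bin/env bash
# LOG is a hypothetical name for the captured console output of this run.
LOG=nvmf-tcp-phy-autotest.log

# How many qpairs ended in the unrecoverable state?
grep -c 'qpair failed and we were unable to recover it' "$LOG"

# Did every CONNECT carry the same fabrics status (sct 1, sc 130)?
grep -c 'Connect command completed with error: sct 1, sc 130' "$LOG"

# Which tqpair addresses were involved (0x178c650, 0x7f26d4000b90, ...)?
grep -o 'Failed to connect tqpair=0x[0-9a-f]*' "$LOG" | sort -u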
00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Write completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Write completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Write completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Write completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Write completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Write completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Write completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Write completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Write completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Write completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Write completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Write completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Write completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 Read completed with error (sct=0, sc=8) 00:26:25.622 starting I/O failed 00:26:25.622 [2024-04-26 12:22:26.720389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:25.622 [2024-04-26 12:22:26.729942] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.622 [2024-04-26 12:22:26.729986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.622 [2024-04-26 12:22:26.730000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.623 [2024-04-26 12:22:26.730006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF 
Fabric CONNECT command 00:26:25.623 [2024-04-26 12:22:26.730011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f26d4000b90 00:26:25.623 [2024-04-26 12:22:26.730024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:25.623 qpair failed and we were unable to recover it. 00:26:25.623 [2024-04-26 12:22:26.739976] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.623 [2024-04-26 12:22:26.740021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.623 [2024-04-26 12:22:26.740033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.623 [2024-04-26 12:22:26.740038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.623 [2024-04-26 12:22:26.740042] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f26d4000b90 00:26:25.623 [2024-04-26 12:22:26.740052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:25.623 qpair failed and we were unable to recover it. 00:26:25.623 [2024-04-26 12:22:26.750021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.623 [2024-04-26 12:22:26.750121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.623 [2024-04-26 12:22:26.750184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.623 [2024-04-26 12:22:26.750209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.623 [2024-04-26 12:22:26.750228] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f26cc000b90 00:26:25.623 [2024-04-26 12:22:26.750278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:25.623 qpair failed and we were unable to recover it. 00:26:25.623 [2024-04-26 12:22:26.760040] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.623 [2024-04-26 12:22:26.760109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.623 [2024-04-26 12:22:26.760139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.623 [2024-04-26 12:22:26.760154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.623 [2024-04-26 12:22:26.760167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f26cc000b90 00:26:25.623 [2024-04-26 12:22:26.760197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:25.623 qpair failed and we were unable to recover it. 
00:26:25.623 [2024-04-26 12:22:26.760537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a160 is same with the state(5) to be set 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Write completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 Read completed with error (sct=0, sc=8) 00:26:25.623 starting I/O failed 00:26:25.623 [2024-04-26 12:22:26.761308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.623 [2024-04-26 12:22:26.770104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.623 [2024-04-26 12:22:26.770203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.623 [2024-04-26 12:22:26.770248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect 
command completed with error: sct 1, sc 130 00:26:25.623 [2024-04-26 12:22:26.770270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.623 [2024-04-26 12:22:26.770290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f26dc000b90 00:26:25.623 [2024-04-26 12:22:26.770333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.623 qpair failed and we were unable to recover it. 00:26:25.623 [2024-04-26 12:22:26.780067] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.623 [2024-04-26 12:22:26.780139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.623 [2024-04-26 12:22:26.780168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.623 [2024-04-26 12:22:26.780183] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.623 [2024-04-26 12:22:26.780198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f26dc000b90 00:26:25.623 [2024-04-26 12:22:26.780227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.623 qpair failed and we were unable to recover it. 00:26:25.623 [2024-04-26 12:22:26.780441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179a160 (9): Bad file descriptor 00:26:25.623 Initializing NVMe Controllers 00:26:25.623 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:25.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:25.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:25.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:25.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:25.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:25.623 Initialization complete. Launching workers. 
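The initiator output above attaches to the target at 10.0.0.2:4420 under nqn.2016-06.io.spdk:cnode1 and spreads the queue pairs across lcores 0-3. Outside the SPDK harness, the same endpoint can be exercised with the Linux kernel initiator through nvme-cli; a minimal sketch, assuming nvme-cli and the nvme-tcp module are available on the initiator host (the cleanup below removes nvme_tcp again with modprobe -r):

#!/usr/bin/env bash
set -e

# Load the kernel NVMe/TCP initiator.
sudo modprobe nvme-tcp

# Discover the subsystems exported at the address/port seen in the log above.
sudo nvme discover -t tcp -a 10.0.0.2 -s 4420

# Connect to the subsystem the test targets, then tear the association down.
sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1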
00:26:25.623 Starting thread on core 1 00:26:25.623 Starting thread on core 2 00:26:25.623 Starting thread on core 3 00:26:25.623 Starting thread on core 0 00:26:25.623 12:22:26 -- host/target_disconnect.sh@59 -- # sync 00:26:25.623 00:26:25.623 real 0m11.391s 00:26:25.623 user 0m21.443s 00:26:25.623 sys 0m3.454s 00:26:25.623 12:22:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:25.623 12:22:26 -- common/autotest_common.sh@10 -- # set +x 00:26:25.623 ************************************ 00:26:25.623 END TEST nvmf_target_disconnect_tc2 00:26:25.623 ************************************ 00:26:25.623 12:22:26 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:26:25.623 12:22:26 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:25.623 12:22:26 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:26:25.623 12:22:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:25.623 12:22:26 -- nvmf/common.sh@117 -- # sync 00:26:25.623 12:22:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:25.623 12:22:26 -- nvmf/common.sh@120 -- # set +e 00:26:25.623 12:22:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:25.623 12:22:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:25.885 rmmod nvme_tcp 00:26:25.885 rmmod nvme_fabrics 00:26:25.885 rmmod nvme_keyring 00:26:25.885 12:22:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:25.885 12:22:26 -- nvmf/common.sh@124 -- # set -e 00:26:25.885 12:22:26 -- nvmf/common.sh@125 -- # return 0 00:26:25.885 12:22:26 -- nvmf/common.sh@478 -- # '[' -n 3568636 ']' 00:26:25.885 12:22:26 -- nvmf/common.sh@479 -- # killprocess 3568636 00:26:25.885 12:22:26 -- common/autotest_common.sh@936 -- # '[' -z 3568636 ']' 00:26:25.885 12:22:26 -- common/autotest_common.sh@940 -- # kill -0 3568636 00:26:25.885 12:22:26 -- common/autotest_common.sh@941 -- # uname 00:26:25.885 12:22:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:25.885 12:22:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3568636 00:26:25.885 12:22:26 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:26:25.885 12:22:26 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:26:25.885 12:22:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3568636' 00:26:25.885 killing process with pid 3568636 00:26:25.885 12:22:26 -- common/autotest_common.sh@955 -- # kill 3568636 00:26:25.885 12:22:26 -- common/autotest_common.sh@960 -- # wait 3568636 00:26:25.885 12:22:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:25.885 12:22:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:25.885 12:22:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:25.885 12:22:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:25.885 12:22:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:25.885 12:22:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.885 12:22:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.885 12:22:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.430 12:22:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:28.430 00:26:28.430 real 0m21.688s 00:26:28.430 user 0m49.411s 00:26:28.430 sys 0m9.387s 00:26:28.430 12:22:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:28.430 12:22:29 -- common/autotest_common.sh@10 -- # set +x 00:26:28.430 ************************************ 00:26:28.430 END TEST nvmf_target_disconnect 00:26:28.430 
************************************ 00:26:28.430 12:22:29 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:26:28.430 12:22:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:28.430 12:22:29 -- common/autotest_common.sh@10 -- # set +x 00:26:28.430 12:22:29 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:26:28.430 00:26:28.430 real 19m43.446s 00:26:28.430 user 40m19.265s 00:26:28.430 sys 6m30.517s 00:26:28.430 12:22:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:28.430 12:22:29 -- common/autotest_common.sh@10 -- # set +x 00:26:28.430 ************************************ 00:26:28.430 END TEST nvmf_tcp 00:26:28.430 ************************************ 00:26:28.430 12:22:29 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:26:28.430 12:22:29 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:28.430 12:22:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:28.430 12:22:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:28.430 12:22:29 -- common/autotest_common.sh@10 -- # set +x 00:26:28.430 ************************************ 00:26:28.430 START TEST spdkcli_nvmf_tcp 00:26:28.430 ************************************ 00:26:28.430 12:22:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:28.430 * Looking for test storage... 00:26:28.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:28.430 12:22:29 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:28.430 12:22:29 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:28.430 12:22:29 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:28.430 12:22:29 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.430 12:22:29 -- nvmf/common.sh@7 -- # uname -s 00:26:28.430 12:22:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.430 12:22:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.430 12:22:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.430 12:22:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.430 12:22:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.430 12:22:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.430 12:22:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.430 12:22:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.430 12:22:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.430 12:22:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.430 12:22:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:28.430 12:22:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:28.430 12:22:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.430 12:22:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.430 12:22:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.430 12:22:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.430 12:22:29 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.430 12:22:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.430 12:22:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.430 12:22:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.430 12:22:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.430 12:22:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.430 12:22:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.430 12:22:29 -- paths/export.sh@5 -- # export PATH 00:26:28.430 12:22:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.430 12:22:29 -- nvmf/common.sh@47 -- # : 0 00:26:28.430 12:22:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:28.430 12:22:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:28.430 12:22:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.430 12:22:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.430 12:22:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.430 12:22:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:28.430 12:22:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:28.430 12:22:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:28.430 12:22:29 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:28.430 12:22:29 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:28.430 12:22:29 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:28.430 12:22:29 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:28.430 12:22:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:28.430 12:22:29 -- common/autotest_common.sh@10 -- # set +x 00:26:28.430 12:22:29 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:28.431 12:22:29 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3570469 00:26:28.431 12:22:29 -- spdkcli/common.sh@34 -- # waitforlisten 3570469 00:26:28.431 12:22:29 -- common/autotest_common.sh@817 -- # '[' -z 3570469 ']' 00:26:28.431 12:22:29 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.431 12:22:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:28.431 12:22:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.431 12:22:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:28.431 12:22:29 -- common/autotest_common.sh@10 -- # set +x 00:26:28.431 12:22:29 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:28.431 [2024-04-26 12:22:29.582400] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:26:28.431 [2024-04-26 12:22:29.582455] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3570469 ] 00:26:28.431 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.431 [2024-04-26 12:22:29.644361] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:28.691 [2024-04-26 12:22:29.710871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.691 [2024-04-26 12:22:29.710900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.264 12:22:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:29.264 12:22:30 -- common/autotest_common.sh@850 -- # return 0 00:26:29.264 12:22:30 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:29.264 12:22:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:29.264 12:22:30 -- common/autotest_common.sh@10 -- # set +x 00:26:29.264 12:22:30 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:29.264 12:22:30 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:29.264 12:22:30 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:29.264 12:22:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:29.264 12:22:30 -- common/autotest_common.sh@10 -- # set +x 00:26:29.264 12:22:30 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:29.264 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:29.264 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:29.264 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:29.264 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:29.264 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:29.264 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:29.264 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:29.264 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:29.264 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:29.264 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:29.264 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:29.264 ' 00:26:29.525 [2024-04-26 12:22:30.707273] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:32.070 [2024-04-26 12:22:32.706776] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.012 [2024-04-26 12:22:33.870592] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:34.926 [2024-04-26 12:22:36.004895] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:36.842 [2024-04-26 12:22:37.838449] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:38.231 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:38.231 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:38.231 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:38.231 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:38.231 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:38.231 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:38.231 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:38.231 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:38.231 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:38.231 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:38.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:38.231 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:38.231 12:22:39 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:38.231 12:22:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:38.231 12:22:39 -- common/autotest_common.sh@10 -- # set +x 00:26:38.231 12:22:39 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:38.231 12:22:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:38.231 12:22:39 -- common/autotest_common.sh@10 -- # set +x 00:26:38.231 12:22:39 -- spdkcli/nvmf.sh@69 -- # check_match 00:26:38.231 12:22:39 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:38.803 12:22:39 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:38.803 12:22:39 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:38.803 12:22:39 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:38.803 12:22:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:38.803 12:22:39 -- common/autotest_common.sh@10 -- # set +x 00:26:38.803 12:22:39 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:38.803 12:22:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:38.803 12:22:39 -- common/autotest_common.sh@10 -- # set +x 00:26:38.803 12:22:39 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:38.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:38.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:38.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:38.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:38.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:38.803 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:38.803 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:38.803 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:38.803 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:38.803 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:38.803 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:38.803 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:38.803 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:38.803 ' 00:26:44.117 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:44.117 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:44.117 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:44.117 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:44.117 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:44.117 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:44.117 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:44.117 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:44.117 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:44.117 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
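A brief aside on what the spdkcli_job.py batches above and below are doing: each entry couples a spdkcli command with the string the job checks for in its output and a flag for whether a match is expected. A minimal hand-run sketch of a slice of that configuration, assuming spdkcli.py accepts a one-shot command as its arguments the same way the 'll /nvmf' check above does; the paths, NQNs and serial numbers are copied from the batch, nothing here is newly chosen:

SPDKCLI=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py
# Create a 32 MB malloc bdev with 512-byte blocks and export it over NVMe/TCP.
$SPDKCLI /bdevs/malloc create 32 512 Malloc3
$SPDKCLI nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
$SPDKCLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
$SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
$SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
# Teardown mirrors the delete batch whose confirmations are printing here.
$SPDKCLI /nvmf/subsystem delete nqn.2014-08.org.spdk:cnode1
$SPDKCLI /bdevs/malloc delete Malloc3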
00:26:44.117 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:44.117 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:44.117 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:44.117 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:44.117 12:22:44 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:44.117 12:22:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:44.117 12:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:44.117 12:22:44 -- spdkcli/nvmf.sh@90 -- # killprocess 3570469 00:26:44.117 12:22:44 -- common/autotest_common.sh@936 -- # '[' -z 3570469 ']' 00:26:44.117 12:22:44 -- common/autotest_common.sh@940 -- # kill -0 3570469 00:26:44.117 12:22:44 -- common/autotest_common.sh@941 -- # uname 00:26:44.117 12:22:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:44.117 12:22:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3570469 00:26:44.117 12:22:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:44.117 12:22:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:44.117 12:22:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3570469' 00:26:44.117 killing process with pid 3570469 00:26:44.117 12:22:44 -- common/autotest_common.sh@955 -- # kill 3570469 00:26:44.117 [2024-04-26 12:22:44.772580] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:44.117 12:22:44 -- common/autotest_common.sh@960 -- # wait 3570469 00:26:44.117 12:22:44 -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:44.117 12:22:44 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:44.117 12:22:44 -- spdkcli/common.sh@13 -- # '[' -n 3570469 ']' 00:26:44.117 12:22:44 -- spdkcli/common.sh@14 -- # killprocess 3570469 00:26:44.117 12:22:44 -- common/autotest_common.sh@936 -- # '[' -z 3570469 ']' 00:26:44.117 12:22:44 -- common/autotest_common.sh@940 -- # kill -0 3570469 00:26:44.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3570469) - No such process 00:26:44.117 12:22:44 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3570469 is not found' 00:26:44.117 Process with pid 3570469 is not found 00:26:44.117 12:22:44 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:44.117 12:22:44 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:44.117 12:22:44 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:44.117 00:26:44.117 real 0m15.496s 00:26:44.117 user 0m31.907s 00:26:44.117 sys 0m0.688s 00:26:44.117 12:22:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:44.117 12:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:44.117 ************************************ 00:26:44.117 END TEST spdkcli_nvmf_tcp 00:26:44.117 ************************************ 00:26:44.117 12:22:44 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:44.117 12:22:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:44.117 12:22:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:44.117 12:22:44 -- 
common/autotest_common.sh@10 -- # set +x 00:26:44.117 ************************************ 00:26:44.117 START TEST nvmf_identify_passthru 00:26:44.117 ************************************ 00:26:44.117 12:22:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:44.117 * Looking for test storage... 00:26:44.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:44.117 12:22:45 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.117 12:22:45 -- nvmf/common.sh@7 -- # uname -s 00:26:44.117 12:22:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.117 12:22:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.117 12:22:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.117 12:22:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.117 12:22:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.117 12:22:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.117 12:22:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.117 12:22:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.117 12:22:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.117 12:22:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.117 12:22:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:44.117 12:22:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:44.117 12:22:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.117 12:22:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.117 12:22:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.117 12:22:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.117 12:22:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.117 12:22:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.117 12:22:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.117 12:22:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.117 12:22:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.117 12:22:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.117 12:22:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.117 12:22:45 -- paths/export.sh@5 -- # export PATH 00:26:44.117 12:22:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.117 12:22:45 -- nvmf/common.sh@47 -- # : 0 00:26:44.117 12:22:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:44.117 12:22:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:44.117 12:22:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.117 12:22:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.117 12:22:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.117 12:22:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:44.117 12:22:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:44.117 12:22:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:44.117 12:22:45 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.117 12:22:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.117 12:22:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.117 12:22:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.117 12:22:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.117 12:22:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.117 12:22:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.117 12:22:45 -- paths/export.sh@5 -- # export PATH 00:26:44.118 12:22:45 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.118 12:22:45 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:44.118 12:22:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:44.118 12:22:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.118 12:22:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:44.118 12:22:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:44.118 12:22:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:44.118 12:22:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.118 12:22:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:44.118 12:22:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.118 12:22:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:44.118 12:22:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:44.118 12:22:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:44.118 12:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:52.268 12:22:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:52.268 12:22:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:52.268 12:22:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:52.268 12:22:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:52.268 12:22:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:52.268 12:22:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:52.268 12:22:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:52.268 12:22:52 -- nvmf/common.sh@295 -- # net_devs=() 00:26:52.268 12:22:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:52.268 12:22:52 -- nvmf/common.sh@296 -- # e810=() 00:26:52.268 12:22:52 -- nvmf/common.sh@296 -- # local -ga e810 00:26:52.268 12:22:52 -- nvmf/common.sh@297 -- # x722=() 00:26:52.268 12:22:52 -- nvmf/common.sh@297 -- # local -ga x722 00:26:52.268 12:22:52 -- nvmf/common.sh@298 -- # mlx=() 00:26:52.268 12:22:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:52.268 12:22:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:52.268 12:22:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:52.268 12:22:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:52.268 12:22:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:52.268 12:22:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:52.268 12:22:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:52.268 12:22:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:52.268 12:22:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:52.268 12:22:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:52.268 12:22:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:52.268 12:22:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:52.268 12:22:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:52.268 12:22:52 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:52.268 12:22:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:52.268 12:22:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:52.268 12:22:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:52.268 12:22:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:52.269 12:22:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.269 12:22:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:52.269 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:52.269 12:22:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.269 12:22:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:52.269 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:52.269 12:22:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:52.269 12:22:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.269 12:22:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.269 12:22:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:52.269 12:22:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.269 12:22:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:52.269 Found net devices under 0000:31:00.0: cvl_0_0 00:26:52.269 12:22:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.269 12:22:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.269 12:22:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.269 12:22:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:52.269 12:22:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.269 12:22:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:52.269 Found net devices under 0000:31:00.1: cvl_0_1 00:26:52.269 12:22:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.269 12:22:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:52.269 12:22:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:52.269 12:22:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:52.269 12:22:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:52.269 12:22:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:52.269 12:22:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:52.269 12:22:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:52.269 12:22:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:52.269 12:22:52 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:52.269 12:22:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:52.269 12:22:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:52.269 12:22:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:52.269 12:22:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:52.269 12:22:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:52.269 12:22:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:52.269 12:22:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:52.269 12:22:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:52.269 12:22:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.269 12:22:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:52.269 12:22:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.269 12:22:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:52.269 12:22:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:52.269 12:22:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:52.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:52.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:26:52.269 00:26:52.269 --- 10.0.0.2 ping statistics --- 00:26:52.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.269 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:26:52.269 12:22:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:52.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:52.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:26:52.269 00:26:52.269 --- 10.0.0.1 ping statistics --- 00:26:52.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.269 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:26:52.269 12:22:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.269 12:22:52 -- nvmf/common.sh@411 -- # return 0 00:26:52.269 12:22:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:52.269 12:22:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:52.269 12:22:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:52.269 12:22:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:52.269 12:22:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:52.269 12:22:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:52.269 12:22:52 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:52.269 12:22:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:52.269 12:22:52 -- common/autotest_common.sh@10 -- # set +x 00:26:52.269 12:22:52 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:52.269 12:22:52 -- common/autotest_common.sh@1510 -- # bdfs=() 00:26:52.269 12:22:52 -- common/autotest_common.sh@1510 -- # local bdfs 00:26:52.269 12:22:52 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:26:52.269 12:22:52 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:26:52.269 12:22:52 -- common/autotest_common.sh@1499 -- # bdfs=() 00:26:52.269 12:22:52 -- common/autotest_common.sh@1499 -- # local bdfs 00:26:52.269 12:22:52 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:26:52.269 12:22:52 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:52.269 12:22:52 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:26:52.269 12:22:52 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:26:52.269 12:22:52 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:26:52.269 12:22:52 -- common/autotest_common.sh@1513 -- # echo 0000:65:00.0 00:26:52.269 12:22:52 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:26:52.269 12:22:52 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:26:52.269 12:22:52 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:26:52.269 12:22:52 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:52.269 12:22:52 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:52.269 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.269 12:22:53 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:26:52.269 12:22:53 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:26:52.269 12:22:53 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:52.269 12:22:53 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:52.269 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.529 12:22:53 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:26:52.529 12:22:53 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:52.529 12:22:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:52.529 12:22:53 -- common/autotest_common.sh@10 -- # set +x 00:26:52.529 12:22:53 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:52.529 12:22:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:52.529 12:22:53 -- common/autotest_common.sh@10 -- # set +x 00:26:52.529 12:22:53 -- target/identify_passthru.sh@31 -- # nvmfpid=3577603 00:26:52.529 12:22:53 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:52.529 12:22:53 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:52.529 12:22:53 -- target/identify_passthru.sh@35 -- # waitforlisten 3577603 00:26:52.529 12:22:53 -- common/autotest_common.sh@817 -- # '[' -z 3577603 ']' 00:26:52.529 12:22:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.529 12:22:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:52.529 12:22:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.529 12:22:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:52.529 12:22:53 -- common/autotest_common.sh@10 -- # set +x 00:26:52.790 [2024-04-26 12:22:53.770016] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
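A condensed sketch of the target bring-up happening here: identify_passthru.sh launches nvmf_tgt inside the target namespace with --wait-for-rpc, waits for the Unix RPC socket, then finishes initialization over JSON-RPC. Every flag below is copied from the rpc_cmd calls in this log; rpc.py stands in for the rpc_cmd wrapper, and the socket-polling loop is a simplification of waitforlisten (which also probes the RPC server):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -i shm id, -e tracepoint mask, -m core mask; --wait-for-rpc defers subsystem init.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
# The RPC socket is a Unix socket, so rpc.py can stay in the root namespace.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done
"$SPDK/scripts/rpc.py" nvmf_set_config --passthru-identify-ctrlr   # must land before framework_start_init
"$SPDK/scripts/rpc.py" framework_start_init
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420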
00:26:52.790 [2024-04-26 12:22:53.770073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.790 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.790 [2024-04-26 12:22:53.836976] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:52.790 [2024-04-26 12:22:53.901775] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:52.790 [2024-04-26 12:22:53.901814] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:52.790 [2024-04-26 12:22:53.901823] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:52.790 [2024-04-26 12:22:53.901830] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:52.790 [2024-04-26 12:22:53.901844] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:52.790 [2024-04-26 12:22:53.901921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.790 [2024-04-26 12:22:53.902035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:52.790 [2024-04-26 12:22:53.902190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.790 [2024-04-26 12:22:53.902190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.361 12:22:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:53.361 12:22:54 -- common/autotest_common.sh@850 -- # return 0 00:26:53.361 12:22:54 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:53.361 12:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.361 12:22:54 -- common/autotest_common.sh@10 -- # set +x 00:26:53.361 INFO: Log level set to 20 00:26:53.361 INFO: Requests: 00:26:53.361 { 00:26:53.361 "jsonrpc": "2.0", 00:26:53.361 "method": "nvmf_set_config", 00:26:53.361 "id": 1, 00:26:53.361 "params": { 00:26:53.361 "admin_cmd_passthru": { 00:26:53.361 "identify_ctrlr": true 00:26:53.361 } 00:26:53.361 } 00:26:53.361 } 00:26:53.361 00:26:53.361 INFO: response: 00:26:53.361 { 00:26:53.361 "jsonrpc": "2.0", 00:26:53.361 "id": 1, 00:26:53.361 "result": true 00:26:53.361 } 00:26:53.361 00:26:53.361 12:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.361 12:22:54 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:53.361 12:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.361 12:22:54 -- common/autotest_common.sh@10 -- # set +x 00:26:53.361 INFO: Setting log level to 20 00:26:53.361 INFO: Setting log level to 20 00:26:53.361 INFO: Log level set to 20 00:26:53.361 INFO: Log level set to 20 00:26:53.361 INFO: Requests: 00:26:53.361 { 00:26:53.361 "jsonrpc": "2.0", 00:26:53.361 "method": "framework_start_init", 00:26:53.361 "id": 1 00:26:53.361 } 00:26:53.361 00:26:53.361 INFO: Requests: 00:26:53.361 { 00:26:53.361 "jsonrpc": "2.0", 00:26:53.361 "method": "framework_start_init", 00:26:53.361 "id": 1 00:26:53.361 } 00:26:53.361 00:26:53.623 [2024-04-26 12:22:54.622272] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:53.623 INFO: response: 00:26:53.623 { 00:26:53.623 "jsonrpc": "2.0", 00:26:53.623 "id": 1, 00:26:53.623 "result": true 00:26:53.623 } 00:26:53.623 00:26:53.623 INFO: response: 00:26:53.623 { 00:26:53.623 
"jsonrpc": "2.0", 00:26:53.623 "id": 1, 00:26:53.623 "result": true 00:26:53.623 } 00:26:53.623 00:26:53.623 12:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.623 12:22:54 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:53.623 12:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.623 12:22:54 -- common/autotest_common.sh@10 -- # set +x 00:26:53.623 INFO: Setting log level to 40 00:26:53.623 INFO: Setting log level to 40 00:26:53.623 INFO: Setting log level to 40 00:26:53.623 [2024-04-26 12:22:54.635530] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.623 12:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.623 12:22:54 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:53.623 12:22:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:53.623 12:22:54 -- common/autotest_common.sh@10 -- # set +x 00:26:53.623 12:22:54 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:26:53.623 12:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.623 12:22:54 -- common/autotest_common.sh@10 -- # set +x 00:26:53.883 Nvme0n1 00:26:53.883 12:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.883 12:22:54 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:53.883 12:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.883 12:22:54 -- common/autotest_common.sh@10 -- # set +x 00:26:53.883 12:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.883 12:22:54 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:53.883 12:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.883 12:22:54 -- common/autotest_common.sh@10 -- # set +x 00:26:53.883 12:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.883 12:22:55 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:53.883 12:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.883 12:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:53.883 [2024-04-26 12:22:55.020099] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.883 12:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.883 12:22:55 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:53.883 12:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.883 12:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:53.883 [2024-04-26 12:22:55.027884] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:53.883 [ 00:26:53.883 { 00:26:53.883 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:53.883 "subtype": "Discovery", 00:26:53.883 "listen_addresses": [], 00:26:53.883 "allow_any_host": true, 00:26:53.883 "hosts": [] 00:26:53.883 }, 00:26:53.883 { 00:26:53.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.883 "subtype": "NVMe", 00:26:53.883 "listen_addresses": [ 00:26:53.883 { 00:26:53.883 "transport": "TCP", 00:26:53.883 "trtype": "TCP", 00:26:53.883 "adrfam": "IPv4", 00:26:53.883 "traddr": "10.0.0.2", 00:26:53.883 "trsvcid": "4420" 00:26:53.883 } 00:26:53.883 ], 
00:26:53.883 "allow_any_host": true, 00:26:53.883 "hosts": [], 00:26:53.883 "serial_number": "SPDK00000000000001", 00:26:53.883 "model_number": "SPDK bdev Controller", 00:26:53.883 "max_namespaces": 1, 00:26:53.883 "min_cntlid": 1, 00:26:53.883 "max_cntlid": 65519, 00:26:53.883 "namespaces": [ 00:26:53.883 { 00:26:53.883 "nsid": 1, 00:26:53.883 "bdev_name": "Nvme0n1", 00:26:53.883 "name": "Nvme0n1", 00:26:53.883 "nguid": "3634473052605494002538450000001F", 00:26:53.883 "uuid": "36344730-5260-5494-0025-38450000001f" 00:26:53.883 } 00:26:53.883 ] 00:26:53.883 } 00:26:53.883 ] 00:26:53.883 12:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.883 12:22:55 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:53.883 12:22:55 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:53.883 12:22:55 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:53.883 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.143 12:22:55 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:26:54.143 12:22:55 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:54.143 12:22:55 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:54.143 12:22:55 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:54.143 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.404 12:22:55 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:26:54.404 12:22:55 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:26:54.404 12:22:55 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:26:54.404 12:22:55 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:54.404 12:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.404 12:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:54.404 12:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.404 12:22:55 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:54.404 12:22:55 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:54.404 12:22:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:54.404 12:22:55 -- nvmf/common.sh@117 -- # sync 00:26:54.404 12:22:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:54.404 12:22:55 -- nvmf/common.sh@120 -- # set +e 00:26:54.404 12:22:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:54.404 12:22:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:54.404 rmmod nvme_tcp 00:26:54.404 rmmod nvme_fabrics 00:26:54.404 rmmod nvme_keyring 00:26:54.404 12:22:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:54.404 12:22:55 -- nvmf/common.sh@124 -- # set -e 00:26:54.404 12:22:55 -- nvmf/common.sh@125 -- # return 0 00:26:54.404 12:22:55 -- nvmf/common.sh@478 -- # '[' -n 3577603 ']' 00:26:54.404 12:22:55 -- nvmf/common.sh@479 -- # killprocess 3577603 00:26:54.404 12:22:55 -- common/autotest_common.sh@936 -- # '[' -z 3577603 ']' 00:26:54.404 12:22:55 -- common/autotest_common.sh@940 -- # kill -0 3577603 00:26:54.404 12:22:55 -- common/autotest_common.sh@941 -- # uname 00:26:54.404 12:22:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:54.404 
12:22:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3577603 00:26:54.664 12:22:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:54.664 12:22:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:54.664 12:22:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3577603' 00:26:54.664 killing process with pid 3577603 00:26:54.664 12:22:55 -- common/autotest_common.sh@955 -- # kill 3577603 00:26:54.664 [2024-04-26 12:22:55.668946] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:54.664 12:22:55 -- common/autotest_common.sh@960 -- # wait 3577603 00:26:54.925 12:22:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:54.925 12:22:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:54.925 12:22:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:54.925 12:22:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:54.925 12:22:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:54.925 12:22:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.925 12:22:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:54.925 12:22:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.836 12:22:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:56.836 00:26:56.836 real 0m12.907s 00:26:56.836 user 0m10.435s 00:26:56.836 sys 0m6.226s 00:26:56.836 12:22:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:56.836 12:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:56.836 ************************************ 00:26:56.836 END TEST nvmf_identify_passthru 00:26:56.836 ************************************ 00:26:56.836 12:22:58 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:56.836 12:22:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:56.836 12:22:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:56.836 12:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:57.096 ************************************ 00:26:57.096 START TEST nvmf_dif 00:26:57.096 ************************************ 00:26:57.096 12:22:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:57.096 * Looking for test storage... 
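The identify_passthru run finishing above reduces to one comparison: read the drive's identify data once locally over PCIe and once through the exported passthru subsystem over NVMe/TCP, and require the fields to match. A sketch using the exact spdk_nvme_identify invocations from the log; only the shell plumbing around them is new:

IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
# Serial number straight from the PCIe device...
pcie_serial=$("$IDENTIFY" -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
# ...and the same field as seen through nqn.2016-06.io.spdk:cnode1 over TCP.
tcp_serial=$("$IDENTIFY" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
if [ "$pcie_serial" != "$tcp_serial" ]; then
    echo "passthru identify mismatch: '$pcie_serial' vs '$tcp_serial'" >&2
    exit 1
fi
echo "passthru identify OK: $pcie_serial"

The model number check in the log works the same way, grepping 'Model Number:' instead.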
00:26:57.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:57.096 12:22:58 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:57.096 12:22:58 -- nvmf/common.sh@7 -- # uname -s 00:26:57.096 12:22:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.096 12:22:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.096 12:22:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.096 12:22:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:57.096 12:22:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:57.096 12:22:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:57.439 12:22:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.439 12:22:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:57.440 12:22:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.440 12:22:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:57.440 12:22:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:57.440 12:22:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:57.440 12:22:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.440 12:22:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:57.440 12:22:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:57.440 12:22:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.440 12:22:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:57.440 12:22:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.440 12:22:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.440 12:22:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.440 12:22:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.440 12:22:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.440 12:22:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.440 12:22:58 -- paths/export.sh@5 -- # export PATH 00:26:57.440 12:22:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.440 12:22:58 -- nvmf/common.sh@47 -- # : 0 00:26:57.440 12:22:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:57.440 12:22:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:57.440 12:22:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:57.440 12:22:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.440 12:22:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.440 12:22:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:57.440 12:22:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:57.440 12:22:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:57.440 12:22:58 -- target/dif.sh@15 -- # NULL_META=16 00:26:57.440 12:22:58 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:57.440 12:22:58 -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:57.440 12:22:58 -- target/dif.sh@15 -- # NULL_DIF=1 00:26:57.440 12:22:58 -- target/dif.sh@135 -- # nvmftestinit 00:26:57.440 12:22:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:57.440 12:22:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.440 12:22:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:57.440 12:22:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:57.440 12:22:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:57.440 12:22:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.440 12:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:57.440 12:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.440 12:22:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:57.440 12:22:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:57.440 12:22:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:57.440 12:22:58 -- common/autotest_common.sh@10 -- # set +x 00:27:04.020 12:23:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:04.020 12:23:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:04.020 12:23:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:04.020 12:23:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:04.020 12:23:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:04.020 12:23:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:04.020 12:23:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:04.020 12:23:05 -- nvmf/common.sh@295 -- # net_devs=() 00:27:04.020 12:23:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:04.020 12:23:05 -- nvmf/common.sh@296 -- # e810=() 00:27:04.020 12:23:05 -- nvmf/common.sh@296 -- # local -ga e810 00:27:04.020 12:23:05 -- nvmf/common.sh@297 -- # x722=() 00:27:04.020 12:23:05 -- nvmf/common.sh@297 -- # local -ga x722 00:27:04.020 12:23:05 -- nvmf/common.sh@298 -- # mlx=() 00:27:04.020 12:23:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:04.020 12:23:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.020 12:23:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.020 12:23:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.020 12:23:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:27:04.020 12:23:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.020 12:23:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.020 12:23:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.020 12:23:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.020 12:23:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.020 12:23:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.020 12:23:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.020 12:23:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:04.020 12:23:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:04.020 12:23:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:04.020 12:23:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.020 12:23:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:04.020 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:04.020 12:23:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.020 12:23:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:04.020 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:04.020 12:23:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:04.020 12:23:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.020 12:23:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.020 12:23:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:04.020 12:23:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.020 12:23:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:04.020 Found net devices under 0000:31:00.0: cvl_0_0 00:27:04.020 12:23:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.020 12:23:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.020 12:23:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.020 12:23:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:04.020 12:23:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.020 12:23:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:04.020 Found net devices under 0000:31:00.1: cvl_0_1 00:27:04.020 12:23:05 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:04.020 12:23:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:04.020 12:23:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:04.020 12:23:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:04.020 12:23:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:04.020 12:23:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.020 12:23:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.020 12:23:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.020 12:23:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:04.020 12:23:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.020 12:23:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.020 12:23:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:04.020 12:23:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.020 12:23:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.020 12:23:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:04.020 12:23:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:04.020 12:23:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.020 12:23:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.281 12:23:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.281 12:23:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.281 12:23:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:04.281 12:23:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.281 12:23:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.281 12:23:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.281 12:23:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:04.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:27:04.281 00:27:04.281 --- 10.0.0.2 ping statistics --- 00:27:04.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.281 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:27:04.281 12:23:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:04.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:27:04.281 00:27:04.281 --- 10.0.0.1 ping statistics --- 00:27:04.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.281 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:27:04.281 12:23:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.281 12:23:05 -- nvmf/common.sh@411 -- # return 0 00:27:04.281 12:23:05 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:27:04.281 12:23:05 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:07.674 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:07.674 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:07.674 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:07.933 12:23:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.933 12:23:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:07.933 12:23:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:07.933 12:23:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.933 12:23:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:07.933 12:23:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:07.933 12:23:09 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:07.933 12:23:09 -- target/dif.sh@137 -- # nvmfappstart 00:27:07.933 12:23:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:07.933 12:23:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:07.933 12:23:09 -- common/autotest_common.sh@10 -- # set +x 00:27:08.193 12:23:09 -- nvmf/common.sh@470 -- # nvmfpid=3583846 00:27:08.193 12:23:09 -- nvmf/common.sh@471 -- # waitforlisten 3583846 00:27:08.193 12:23:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:08.193 12:23:09 -- common/autotest_common.sh@817 -- # '[' -z 3583846 ']' 00:27:08.193 12:23:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.193 12:23:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:08.193 12:23:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
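For orientation, the nvmf_tcp_init sequence traced above splits the two E810 ports (0000:31:00.0/.1) between target and initiator: cvl_0_0 is moved into a private namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2 for the target, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic on port 4420 crosses the link between the two ports. A condensed recap of the commands the harness ran, with interface names and addresses taken from the trace (this is an illustrative sketch, not the nvmf/common.sh source):

    # Isolate the target-side port in its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator side keeps 10.0.0.1; target side gets 10.0.0.2 inside the namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring the links (and the namespace loopback) up.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Admit NVMe/TCP (port 4420) on the initiator interface, then verify both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace that follows.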
00:27:08.193 12:23:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:08.193 12:23:09 -- common/autotest_common.sh@10 -- # set +x 00:27:08.193 [2024-04-26 12:23:09.208220] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:27:08.193 [2024-04-26 12:23:09.208282] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.193 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.193 [2024-04-26 12:23:09.279290] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.193 [2024-04-26 12:23:09.351037] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.193 [2024-04-26 12:23:09.351071] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.193 [2024-04-26 12:23:09.351079] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.193 [2024-04-26 12:23:09.351085] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:08.193 [2024-04-26 12:23:09.351091] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:08.193 [2024-04-26 12:23:09.351110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.763 12:23:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:08.763 12:23:09 -- common/autotest_common.sh@850 -- # return 0 00:27:08.763 12:23:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:08.763 12:23:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:08.763 12:23:09 -- common/autotest_common.sh@10 -- # set +x 00:27:09.023 12:23:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.023 12:23:09 -- target/dif.sh@139 -- # create_transport 00:27:09.023 12:23:10 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:09.023 12:23:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.023 12:23:10 -- common/autotest_common.sh@10 -- # set +x 00:27:09.023 [2024-04-26 12:23:10.005790] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.023 12:23:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.023 12:23:10 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:09.023 12:23:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:09.023 12:23:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:09.023 12:23:10 -- common/autotest_common.sh@10 -- # set +x 00:27:09.023 ************************************ 00:27:09.023 START TEST fio_dif_1_default 00:27:09.023 ************************************ 00:27:09.024 12:23:10 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:27:09.024 12:23:10 -- target/dif.sh@86 -- # create_subsystems 0 00:27:09.024 12:23:10 -- target/dif.sh@28 -- # local sub 00:27:09.024 12:23:10 -- target/dif.sh@30 -- # for sub in "$@" 00:27:09.024 12:23:10 -- target/dif.sh@31 -- # create_subsystem 0 00:27:09.024 12:23:10 -- target/dif.sh@18 -- # local sub_id=0 00:27:09.024 12:23:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:09.024 12:23:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.024 12:23:10 -- common/autotest_common.sh@10 -- # set +x 00:27:09.024 
bdev_null0 00:27:09.024 12:23:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.024 12:23:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:09.024 12:23:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.024 12:23:10 -- common/autotest_common.sh@10 -- # set +x 00:27:09.024 12:23:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.024 12:23:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:09.024 12:23:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.024 12:23:10 -- common/autotest_common.sh@10 -- # set +x 00:27:09.024 12:23:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.024 12:23:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:09.024 12:23:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.024 12:23:10 -- common/autotest_common.sh@10 -- # set +x 00:27:09.024 [2024-04-26 12:23:10.202450] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.024 12:23:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.024 12:23:10 -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:09.024 12:23:10 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:09.024 12:23:10 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:09.024 12:23:10 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:09.024 12:23:10 -- nvmf/common.sh@521 -- # config=() 00:27:09.024 12:23:10 -- nvmf/common.sh@521 -- # local subsystem config 00:27:09.024 12:23:10 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:09.024 12:23:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:09.024 12:23:10 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:09.024 12:23:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:09.024 { 00:27:09.024 "params": { 00:27:09.024 "name": "Nvme$subsystem", 00:27:09.024 "trtype": "$TEST_TRANSPORT", 00:27:09.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.024 "adrfam": "ipv4", 00:27:09.024 "trsvcid": "$NVMF_PORT", 00:27:09.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.024 "hdgst": ${hdgst:-false}, 00:27:09.024 "ddgst": ${ddgst:-false} 00:27:09.024 }, 00:27:09.024 "method": "bdev_nvme_attach_controller" 00:27:09.024 } 00:27:09.024 EOF 00:27:09.024 )") 00:27:09.024 12:23:10 -- target/dif.sh@82 -- # gen_fio_conf 00:27:09.024 12:23:10 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:09.024 12:23:10 -- target/dif.sh@54 -- # local file 00:27:09.024 12:23:10 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:09.024 12:23:10 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:09.024 12:23:10 -- target/dif.sh@56 -- # cat 00:27:09.024 12:23:10 -- common/autotest_common.sh@1327 -- # shift 00:27:09.024 12:23:10 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:09.024 12:23:10 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:09.024 12:23:10 -- nvmf/common.sh@543 -- # cat 00:27:09.024 12:23:10 -- common/autotest_common.sh@1331 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:09.024 12:23:10 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:09.024 12:23:10 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:09.024 12:23:10 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:09.024 12:23:10 -- target/dif.sh@72 -- # (( file <= files )) 00:27:09.024 12:23:10 -- nvmf/common.sh@545 -- # jq . 00:27:09.024 12:23:10 -- nvmf/common.sh@546 -- # IFS=, 00:27:09.024 12:23:10 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:09.024 "params": { 00:27:09.024 "name": "Nvme0", 00:27:09.024 "trtype": "tcp", 00:27:09.024 "traddr": "10.0.0.2", 00:27:09.024 "adrfam": "ipv4", 00:27:09.024 "trsvcid": "4420", 00:27:09.024 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:09.024 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:09.024 "hdgst": false, 00:27:09.024 "ddgst": false 00:27:09.024 }, 00:27:09.024 "method": "bdev_nvme_attach_controller" 00:27:09.024 }' 00:27:09.312 12:23:10 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:09.312 12:23:10 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:09.312 12:23:10 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:09.312 12:23:10 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:09.312 12:23:10 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:09.312 12:23:10 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:09.312 12:23:10 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:09.312 12:23:10 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:09.312 12:23:10 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:09.312 12:23:10 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:09.572 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:09.572 fio-3.35 00:27:09.572 Starting 1 thread 00:27:09.572 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.798 00:27:21.798 filename0: (groupid=0, jobs=1): err= 0: pid=3584377: Fri Apr 26 12:23:21 2024 00:27:21.798 read: IOPS=188, BW=755KiB/s (773kB/s)(7568KiB/10020msec) 00:27:21.798 slat (nsec): min=5333, max=33992, avg=6185.53, stdev=1822.12 00:27:21.798 clat (usec): min=601, max=42985, avg=21165.49, stdev=20272.93 00:27:21.798 lat (usec): min=607, max=42993, avg=21171.68, stdev=20272.88 00:27:21.798 clat percentiles (usec): 00:27:21.798 | 1.00th=[ 783], 5.00th=[ 898], 10.00th=[ 914], 20.00th=[ 938], 00:27:21.798 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 1778], 60.00th=[41157], 00:27:21.798 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:27:21.798 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:27:21.798 | 99.99th=[42730] 00:27:21.798 bw ( KiB/s): min= 704, max= 768, per=99.96%, avg=755.20, stdev=24.13, samples=20 00:27:21.798 iops : min= 176, max= 192, avg=188.80, stdev= 6.03, samples=20 00:27:21.798 lat (usec) : 750=0.85%, 1000=47.09% 00:27:21.798 lat (msec) : 2=2.17%, 50=49.89% 00:27:21.798 cpu : usr=94.66%, sys=5.14%, ctx=14, majf=0, minf=214 00:27:21.798 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:21.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:27:21.798 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:21.798 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:21.798 00:27:21.798 Run status group 0 (all jobs): 00:27:21.798 READ: bw=755KiB/s (773kB/s), 755KiB/s-755KiB/s (773kB/s-773kB/s), io=7568KiB (7750kB), run=10020-10020msec 00:27:21.798 12:23:21 -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:21.798 12:23:21 -- target/dif.sh@43 -- # local sub 00:27:21.798 12:23:21 -- target/dif.sh@45 -- # for sub in "$@" 00:27:21.798 12:23:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:21.798 12:23:21 -- target/dif.sh@36 -- # local sub_id=0 00:27:21.798 12:23:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:21.798 12:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.798 12:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:21.798 12:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.798 12:23:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:21.798 12:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.798 12:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:21.798 12:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.798 00:27:21.798 real 0m11.052s 00:27:21.798 user 0m27.832s 00:27:21.798 sys 0m0.809s 00:27:21.798 12:23:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:21.798 12:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:21.798 ************************************ 00:27:21.798 END TEST fio_dif_1_default 00:27:21.798 ************************************ 00:27:21.798 12:23:21 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:21.798 12:23:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:21.798 12:23:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:21.798 12:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:21.798 ************************************ 00:27:21.798 START TEST fio_dif_1_multi_subsystems 00:27:21.798 ************************************ 00:27:21.798 12:23:21 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:27:21.798 12:23:21 -- target/dif.sh@92 -- # local files=1 00:27:21.798 12:23:21 -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:21.798 12:23:21 -- target/dif.sh@28 -- # local sub 00:27:21.798 12:23:21 -- target/dif.sh@30 -- # for sub in "$@" 00:27:21.798 12:23:21 -- target/dif.sh@31 -- # create_subsystem 0 00:27:21.798 12:23:21 -- target/dif.sh@18 -- # local sub_id=0 00:27:21.798 12:23:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:21.798 12:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.798 12:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:21.798 bdev_null0 00:27:21.798 12:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.798 12:23:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:21.798 12:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.798 12:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:21.798 12:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.798 12:23:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:21.798 12:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.798 12:23:21 -- common/autotest_common.sh@10 -- # set +x 
00:27:21.798 12:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.798 12:23:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:21.798 12:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.798 12:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:21.798 [2024-04-26 12:23:21.443960] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.798 12:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.798 12:23:21 -- target/dif.sh@30 -- # for sub in "$@" 00:27:21.798 12:23:21 -- target/dif.sh@31 -- # create_subsystem 1 00:27:21.798 12:23:21 -- target/dif.sh@18 -- # local sub_id=1 00:27:21.798 12:23:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:21.798 12:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.798 12:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:21.798 bdev_null1 00:27:21.799 12:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.799 12:23:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:21.799 12:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.799 12:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:21.799 12:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.799 12:23:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:21.799 12:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.799 12:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:21.799 12:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.799 12:23:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:21.799 12:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.799 12:23:21 -- common/autotest_common.sh@10 -- # set +x 00:27:21.799 12:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.799 12:23:21 -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:21.799 12:23:21 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:21.799 12:23:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:21.799 12:23:21 -- nvmf/common.sh@521 -- # config=() 00:27:21.799 12:23:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:21.799 12:23:21 -- nvmf/common.sh@521 -- # local subsystem config 00:27:21.799 12:23:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:21.799 12:23:21 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:21.799 12:23:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:21.799 { 00:27:21.799 "params": { 00:27:21.799 "name": "Nvme$subsystem", 00:27:21.799 "trtype": "$TEST_TRANSPORT", 00:27:21.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.799 "adrfam": "ipv4", 00:27:21.799 "trsvcid": "$NVMF_PORT", 00:27:21.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.799 "hdgst": ${hdgst:-false}, 00:27:21.799 "ddgst": ${ddgst:-false} 00:27:21.799 }, 00:27:21.799 "method": "bdev_nvme_attach_controller" 00:27:21.799 } 00:27:21.799 EOF 00:27:21.799 )") 00:27:21.799 12:23:21 -- target/dif.sh@82 -- # 
gen_fio_conf 00:27:21.799 12:23:21 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:21.799 12:23:21 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:21.799 12:23:21 -- target/dif.sh@54 -- # local file 00:27:21.799 12:23:21 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:21.799 12:23:21 -- target/dif.sh@56 -- # cat 00:27:21.799 12:23:21 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:21.799 12:23:21 -- common/autotest_common.sh@1327 -- # shift 00:27:21.799 12:23:21 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:21.799 12:23:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:21.799 12:23:21 -- nvmf/common.sh@543 -- # cat 00:27:21.799 12:23:21 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:21.799 12:23:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:21.799 12:23:21 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:21.799 12:23:21 -- target/dif.sh@72 -- # (( file <= files )) 00:27:21.799 12:23:21 -- target/dif.sh@73 -- # cat 00:27:21.799 12:23:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:21.799 12:23:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:21.799 12:23:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:21.799 { 00:27:21.799 "params": { 00:27:21.799 "name": "Nvme$subsystem", 00:27:21.799 "trtype": "$TEST_TRANSPORT", 00:27:21.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.799 "adrfam": "ipv4", 00:27:21.799 "trsvcid": "$NVMF_PORT", 00:27:21.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.799 "hdgst": ${hdgst:-false}, 00:27:21.799 "ddgst": ${ddgst:-false} 00:27:21.799 }, 00:27:21.799 "method": "bdev_nvme_attach_controller" 00:27:21.799 } 00:27:21.799 EOF 00:27:21.799 )") 00:27:21.799 12:23:21 -- target/dif.sh@72 -- # (( file++ )) 00:27:21.799 12:23:21 -- target/dif.sh@72 -- # (( file <= files )) 00:27:21.799 12:23:21 -- nvmf/common.sh@543 -- # cat 00:27:21.799 12:23:21 -- nvmf/common.sh@545 -- # jq . 
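Each dif.sh fio run in this log follows the same pattern: gen_nvmf_target_json assembles a bdev_nvme_attach_controller block per subsystem (the JSON printed just below), gen_fio_conf writes the job sections, and both are handed to fio over /dev/fd/62 and /dev/fd/61 while the SPDK bdev fio plugin is pulled in through LD_PRELOAD. A rough standalone equivalent with illustrative file names (the harness itself uses process substitution rather than files on disk):

    # bdev.json: the attach-controller config printed in the trace below.
    # job.fio:   the generated fio job section(s).
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

With that config the filename0/filename1 jobs in the fio banner exercise the namespaces attached from nqn.2016-06.io.spdk:cnode0 and cnode1 over 10.0.0.2:4420.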
00:27:21.799 12:23:21 -- nvmf/common.sh@546 -- # IFS=, 00:27:21.799 12:23:21 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:21.799 "params": { 00:27:21.799 "name": "Nvme0", 00:27:21.799 "trtype": "tcp", 00:27:21.799 "traddr": "10.0.0.2", 00:27:21.799 "adrfam": "ipv4", 00:27:21.799 "trsvcid": "4420", 00:27:21.799 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:21.799 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:21.799 "hdgst": false, 00:27:21.799 "ddgst": false 00:27:21.799 }, 00:27:21.799 "method": "bdev_nvme_attach_controller" 00:27:21.799 },{ 00:27:21.799 "params": { 00:27:21.799 "name": "Nvme1", 00:27:21.799 "trtype": "tcp", 00:27:21.799 "traddr": "10.0.0.2", 00:27:21.799 "adrfam": "ipv4", 00:27:21.799 "trsvcid": "4420", 00:27:21.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:21.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:21.799 "hdgst": false, 00:27:21.799 "ddgst": false 00:27:21.799 }, 00:27:21.799 "method": "bdev_nvme_attach_controller" 00:27:21.799 }' 00:27:21.799 12:23:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:21.799 12:23:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:21.799 12:23:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:21.799 12:23:21 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:21.799 12:23:21 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:21.799 12:23:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:21.799 12:23:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:21.799 12:23:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:21.799 12:23:21 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:21.799 12:23:21 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:21.799 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:21.799 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:21.799 fio-3.35 00:27:21.799 Starting 2 threads 00:27:21.799 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.797 00:27:31.797 filename0: (groupid=0, jobs=1): err= 0: pid=3586633: Fri Apr 26 12:23:32 2024 00:27:31.797 read: IOPS=187, BW=751KiB/s (769kB/s)(7536KiB/10034msec) 00:27:31.797 slat (nsec): min=5312, max=31504, avg=6248.21, stdev=1791.67 00:27:31.797 clat (usec): min=599, max=43059, avg=21286.47, stdev=20385.13 00:27:31.797 lat (usec): min=604, max=43067, avg=21292.72, stdev=20385.10 00:27:31.797 clat percentiles (usec): 00:27:31.797 | 1.00th=[ 685], 5.00th=[ 914], 10.00th=[ 930], 20.00th=[ 955], 00:27:31.797 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 2245], 60.00th=[41157], 00:27:31.797 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:31.797 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:27:31.797 | 99.99th=[43254] 00:27:31.797 bw ( KiB/s): min= 704, max= 768, per=66.33%, avg=752.00, stdev=28.43, samples=20 00:27:31.797 iops : min= 176, max= 192, avg=188.00, stdev= 7.11, samples=20 00:27:31.797 lat (usec) : 750=2.28%, 1000=43.47% 00:27:31.797 lat (msec) : 2=4.14%, 4=0.21%, 50=49.89% 00:27:31.797 cpu : usr=96.39%, sys=3.40%, ctx=28, majf=0, minf=73 00:27:31.797 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:31.797 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.797 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.797 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:31.797 filename1: (groupid=0, jobs=1): err= 0: pid=3586634: Fri Apr 26 12:23:32 2024 00:27:31.797 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10019msec) 00:27:31.797 slat (nsec): min=5323, max=32146, avg=6185.43, stdev=1511.36 00:27:31.797 clat (usec): min=40921, max=42999, avg=41728.28, stdev=462.18 00:27:31.797 lat (usec): min=40927, max=43004, avg=41734.46, stdev=462.29 00:27:31.797 clat percentiles (usec): 00:27:31.797 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:31.797 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:27:31.797 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:31.797 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:27:31.797 | 99.99th=[43254] 00:27:31.797 bw ( KiB/s): min= 352, max= 384, per=33.69%, avg=382.40, stdev= 7.16, samples=20 00:27:31.797 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:27:31.797 lat (msec) : 50=100.00% 00:27:31.797 cpu : usr=96.40%, sys=3.39%, ctx=24, majf=0, minf=165 00:27:31.797 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:31.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.797 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.797 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:31.797 00:27:31.797 Run status group 0 (all jobs): 00:27:31.797 READ: bw=1134KiB/s (1161kB/s), 383KiB/s-751KiB/s (392kB/s-769kB/s), io=11.1MiB (11.6MB), run=10019-10034msec 00:27:31.797 12:23:32 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:31.797 12:23:32 -- target/dif.sh@43 -- # local sub 00:27:31.797 12:23:32 -- target/dif.sh@45 -- # for sub in "$@" 00:27:31.797 12:23:32 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:31.798 12:23:32 -- target/dif.sh@36 -- # local sub_id=0 00:27:31.798 12:23:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:31.798 12:23:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.798 12:23:32 -- common/autotest_common.sh@10 -- # set +x 00:27:31.798 12:23:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.798 12:23:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:31.798 12:23:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.798 12:23:32 -- common/autotest_common.sh@10 -- # set +x 00:27:31.798 12:23:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.798 12:23:32 -- target/dif.sh@45 -- # for sub in "$@" 00:27:31.798 12:23:32 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:31.798 12:23:32 -- target/dif.sh@36 -- # local sub_id=1 00:27:31.798 12:23:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:31.798 12:23:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.798 12:23:32 -- common/autotest_common.sh@10 -- # set +x 00:27:31.798 12:23:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.798 12:23:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:31.798 12:23:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.798 
12:23:32 -- common/autotest_common.sh@10 -- # set +x 00:27:31.798 12:23:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.798 00:27:31.798 real 0m11.413s 00:27:31.798 user 0m36.790s 00:27:31.798 sys 0m1.002s 00:27:31.798 12:23:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:31.798 12:23:32 -- common/autotest_common.sh@10 -- # set +x 00:27:31.798 ************************************ 00:27:31.798 END TEST fio_dif_1_multi_subsystems 00:27:31.798 ************************************ 00:27:31.798 12:23:32 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:31.798 12:23:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:31.798 12:23:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:31.798 12:23:32 -- common/autotest_common.sh@10 -- # set +x 00:27:31.798 ************************************ 00:27:31.798 START TEST fio_dif_rand_params 00:27:31.798 ************************************ 00:27:31.798 12:23:33 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:27:31.798 12:23:33 -- target/dif.sh@100 -- # local NULL_DIF 00:27:31.798 12:23:33 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:31.798 12:23:33 -- target/dif.sh@103 -- # NULL_DIF=3 00:27:31.798 12:23:33 -- target/dif.sh@103 -- # bs=128k 00:27:31.798 12:23:33 -- target/dif.sh@103 -- # numjobs=3 00:27:31.798 12:23:33 -- target/dif.sh@103 -- # iodepth=3 00:27:31.798 12:23:33 -- target/dif.sh@103 -- # runtime=5 00:27:31.798 12:23:33 -- target/dif.sh@105 -- # create_subsystems 0 00:27:31.798 12:23:33 -- target/dif.sh@28 -- # local sub 00:27:31.798 12:23:33 -- target/dif.sh@30 -- # for sub in "$@" 00:27:31.798 12:23:33 -- target/dif.sh@31 -- # create_subsystem 0 00:27:31.798 12:23:33 -- target/dif.sh@18 -- # local sub_id=0 00:27:31.798 12:23:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:31.798 12:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.798 12:23:33 -- common/autotest_common.sh@10 -- # set +x 00:27:31.798 bdev_null0 00:27:31.798 12:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.798 12:23:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:31.798 12:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.798 12:23:33 -- common/autotest_common.sh@10 -- # set +x 00:27:32.058 12:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.058 12:23:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:32.058 12:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.058 12:23:33 -- common/autotest_common.sh@10 -- # set +x 00:27:32.058 12:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.058 12:23:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:32.058 12:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.058 12:23:33 -- common/autotest_common.sh@10 -- # set +x 00:27:32.058 [2024-04-26 12:23:33.046520] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.058 12:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.058 12:23:33 -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:32.058 12:23:33 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:32.058 12:23:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 
00:27:32.058 12:23:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.058 12:23:33 -- nvmf/common.sh@521 -- # config=() 00:27:32.058 12:23:33 -- nvmf/common.sh@521 -- # local subsystem config 00:27:32.058 12:23:33 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.058 12:23:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:32.058 12:23:33 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:32.058 12:23:33 -- target/dif.sh@82 -- # gen_fio_conf 00:27:32.058 12:23:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:32.058 { 00:27:32.058 "params": { 00:27:32.058 "name": "Nvme$subsystem", 00:27:32.058 "trtype": "$TEST_TRANSPORT", 00:27:32.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.058 "adrfam": "ipv4", 00:27:32.058 "trsvcid": "$NVMF_PORT", 00:27:32.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.058 "hdgst": ${hdgst:-false}, 00:27:32.058 "ddgst": ${ddgst:-false} 00:27:32.058 }, 00:27:32.058 "method": "bdev_nvme_attach_controller" 00:27:32.058 } 00:27:32.058 EOF 00:27:32.058 )") 00:27:32.058 12:23:33 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:32.058 12:23:33 -- target/dif.sh@54 -- # local file 00:27:32.058 12:23:33 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:32.058 12:23:33 -- target/dif.sh@56 -- # cat 00:27:32.058 12:23:33 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:32.058 12:23:33 -- common/autotest_common.sh@1327 -- # shift 00:27:32.058 12:23:33 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:32.058 12:23:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.058 12:23:33 -- nvmf/common.sh@543 -- # cat 00:27:32.058 12:23:33 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:32.058 12:23:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:32.058 12:23:33 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:32.058 12:23:33 -- target/dif.sh@72 -- # (( file <= files )) 00:27:32.059 12:23:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:32.059 12:23:33 -- nvmf/common.sh@545 -- # jq . 
00:27:32.059 12:23:33 -- nvmf/common.sh@546 -- # IFS=, 00:27:32.059 12:23:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:32.059 "params": { 00:27:32.059 "name": "Nvme0", 00:27:32.059 "trtype": "tcp", 00:27:32.059 "traddr": "10.0.0.2", 00:27:32.059 "adrfam": "ipv4", 00:27:32.059 "trsvcid": "4420", 00:27:32.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:32.059 "hdgst": false, 00:27:32.059 "ddgst": false 00:27:32.059 }, 00:27:32.059 "method": "bdev_nvme_attach_controller" 00:27:32.059 }' 00:27:32.059 12:23:33 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:32.059 12:23:33 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:32.059 12:23:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.059 12:23:33 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:32.059 12:23:33 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:32.059 12:23:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:32.059 12:23:33 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:32.059 12:23:33 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:32.059 12:23:33 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:32.059 12:23:33 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.318 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:32.318 ... 00:27:32.318 fio-3.35 00:27:32.318 Starting 3 threads 00:27:32.318 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.901 00:27:38.901 filename0: (groupid=0, jobs=1): err= 0: pid=3589142: Fri Apr 26 12:23:39 2024 00:27:38.901 read: IOPS=233, BW=29.2MiB/s (30.7MB/s)(146MiB/5006msec) 00:27:38.901 slat (nsec): min=5350, max=46918, avg=8454.52, stdev=2306.19 00:27:38.901 clat (usec): min=5535, max=55025, avg=12810.74, stdev=8892.18 00:27:38.901 lat (usec): min=5544, max=55032, avg=12819.19, stdev=8891.98 00:27:38.901 clat percentiles (usec): 00:27:38.901 | 1.00th=[ 6325], 5.00th=[ 7504], 10.00th=[ 8225], 20.00th=[ 9110], 00:27:38.901 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10814], 60.00th=[11338], 00:27:38.901 | 70.00th=[12125], 80.00th=[12911], 90.00th=[14484], 95.00th=[17695], 00:27:38.901 | 99.00th=[52167], 99.50th=[53216], 99.90th=[54264], 99.95th=[54789], 00:27:38.901 | 99.99th=[54789] 00:27:38.901 bw ( KiB/s): min=19712, max=37376, per=34.39%, avg=29900.80, stdev=4971.06, samples=10 00:27:38.901 iops : min= 154, max= 292, avg=233.60, stdev=38.84, samples=10 00:27:38.901 lat (msec) : 10=31.77%, 20=63.36%, 50=1.28%, 100=3.59% 00:27:38.901 cpu : usr=94.49%, sys=4.92%, ctx=311, majf=0, minf=153 00:27:38.901 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:38.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.901 issued rwts: total=1171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.901 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:38.901 filename0: (groupid=0, jobs=1): err= 0: pid=3589143: Fri Apr 26 12:23:39 2024 00:27:38.901 read: IOPS=233, BW=29.2MiB/s (30.7MB/s)(146MiB/5005msec) 00:27:38.901 slat (nsec): min=5370, max=30495, avg=7833.59, stdev=1578.57 00:27:38.901 clat (usec): 
min=5326, max=56413, avg=12811.42, stdev=7246.84 00:27:38.901 lat (usec): min=5332, max=56420, avg=12819.25, stdev=7247.00 00:27:38.901 clat percentiles (usec): 00:27:38.901 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 7439], 20.00th=[ 8979], 00:27:38.901 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11863], 60.00th=[12780], 00:27:38.901 | 70.00th=[13829], 80.00th=[14746], 90.00th=[15795], 95.00th=[16712], 00:27:38.901 | 99.00th=[51119], 99.50th=[52691], 99.90th=[53740], 99.95th=[56361], 00:27:38.901 | 99.99th=[56361] 00:27:38.901 bw ( KiB/s): min=22784, max=33024, per=34.40%, avg=29907.10, stdev=3279.73, samples=10 00:27:38.901 iops : min= 178, max= 258, avg=233.60, stdev=25.59, samples=10 00:27:38.901 lat (msec) : 10=28.95%, 20=67.98%, 50=1.71%, 100=1.37% 00:27:38.901 cpu : usr=95.60%, sys=4.16%, ctx=11, majf=0, minf=106 00:27:38.901 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:38.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.901 issued rwts: total=1171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.901 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:38.901 filename0: (groupid=0, jobs=1): err= 0: pid=3589144: Fri Apr 26 12:23:39 2024 00:27:38.901 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(136MiB/5045msec) 00:27:38.901 slat (nsec): min=5426, max=41708, avg=8314.75, stdev=1824.22 00:27:38.901 clat (usec): min=5649, max=89280, avg=13898.96, stdev=10683.35 00:27:38.901 lat (usec): min=5657, max=89288, avg=13907.28, stdev=10683.24 00:27:38.901 clat percentiles (usec): 00:27:38.901 | 1.00th=[ 6521], 5.00th=[ 7832], 10.00th=[ 8455], 20.00th=[ 9372], 00:27:38.901 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11600], 00:27:38.901 | 70.00th=[12387], 80.00th=[13435], 90.00th=[15008], 95.00th=[50070], 00:27:38.901 | 99.00th=[52167], 99.50th=[53216], 99.90th=[55837], 99.95th=[89654], 00:27:38.901 | 99.99th=[89654] 00:27:38.901 bw ( KiB/s): min=22784, max=33792, per=31.86%, avg=27699.20, stdev=2842.51, samples=10 00:27:38.901 iops : min= 178, max= 264, avg=216.40, stdev=22.21, samples=10 00:27:38.901 lat (msec) : 10=28.94%, 20=63.78%, 50=2.49%, 100=4.79% 00:27:38.901 cpu : usr=95.82%, sys=3.95%, ctx=13, majf=0, minf=68 00:27:38.901 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:38.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.901 issued rwts: total=1085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.901 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:38.901 00:27:38.901 Run status group 0 (all jobs): 00:27:38.901 READ: bw=84.9MiB/s (89.0MB/s), 26.9MiB/s-29.2MiB/s (28.2MB/s-30.7MB/s), io=428MiB (449MB), run=5005-5045msec 00:27:38.901 12:23:39 -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:38.901 12:23:39 -- target/dif.sh@43 -- # local sub 00:27:38.901 12:23:39 -- target/dif.sh@45 -- # for sub in "$@" 00:27:38.901 12:23:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:38.901 12:23:39 -- target/dif.sh@36 -- # local sub_id=0 00:27:38.901 12:23:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:38.901 12:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.901 12:23:39 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 12:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:27:38.901 12:23:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:38.901 12:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.901 12:23:39 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 12:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.901 12:23:39 -- target/dif.sh@109 -- # NULL_DIF=2 00:27:38.901 12:23:39 -- target/dif.sh@109 -- # bs=4k 00:27:38.901 12:23:39 -- target/dif.sh@109 -- # numjobs=8 00:27:38.901 12:23:39 -- target/dif.sh@109 -- # iodepth=16 00:27:38.901 12:23:39 -- target/dif.sh@109 -- # runtime= 00:27:38.901 12:23:39 -- target/dif.sh@109 -- # files=2 00:27:38.901 12:23:39 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:38.901 12:23:39 -- target/dif.sh@28 -- # local sub 00:27:38.901 12:23:39 -- target/dif.sh@30 -- # for sub in "$@" 00:27:38.901 12:23:39 -- target/dif.sh@31 -- # create_subsystem 0 00:27:38.901 12:23:39 -- target/dif.sh@18 -- # local sub_id=0 00:27:38.901 12:23:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:38.901 12:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.901 12:23:39 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 bdev_null0 00:27:38.901 12:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.901 12:23:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:38.901 12:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.901 12:23:39 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 12:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.901 12:23:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:38.901 12:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.901 12:23:39 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 12:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.901 12:23:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:38.901 12:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.901 12:23:39 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 [2024-04-26 12:23:39.212145] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.901 12:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.901 12:23:39 -- target/dif.sh@30 -- # for sub in "$@" 00:27:38.901 12:23:39 -- target/dif.sh@31 -- # create_subsystem 1 00:27:38.901 12:23:39 -- target/dif.sh@18 -- # local sub_id=1 00:27:38.901 12:23:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:38.901 12:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.901 12:23:39 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 bdev_null1 00:27:38.901 12:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.901 12:23:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:38.901 12:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.901 12:23:39 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 12:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.901 12:23:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:38.901 12:23:39 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.901 12:23:39 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 12:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.901 12:23:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:38.901 12:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.901 12:23:39 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 12:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.901 12:23:39 -- target/dif.sh@30 -- # for sub in "$@" 00:27:38.901 12:23:39 -- target/dif.sh@31 -- # create_subsystem 2 00:27:38.901 12:23:39 -- target/dif.sh@18 -- # local sub_id=2 00:27:38.901 12:23:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:38.901 12:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.901 12:23:39 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 bdev_null2 00:27:38.901 12:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.901 12:23:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:38.901 12:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.901 12:23:39 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 12:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.901 12:23:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:38.901 12:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.901 12:23:39 -- common/autotest_common.sh@10 -- # set +x 00:27:38.902 12:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.902 12:23:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:38.902 12:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.902 12:23:39 -- common/autotest_common.sh@10 -- # set +x 00:27:38.902 12:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.902 12:23:39 -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:38.902 12:23:39 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:38.902 12:23:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:38.902 12:23:39 -- nvmf/common.sh@521 -- # config=() 00:27:38.902 12:23:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.902 12:23:39 -- nvmf/common.sh@521 -- # local subsystem config 00:27:38.902 12:23:39 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.902 12:23:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:38.902 12:23:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:38.902 { 00:27:38.902 "params": { 00:27:38.902 "name": "Nvme$subsystem", 00:27:38.902 "trtype": "$TEST_TRANSPORT", 00:27:38.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.902 "adrfam": "ipv4", 00:27:38.902 "trsvcid": "$NVMF_PORT", 00:27:38.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.902 "hdgst": ${hdgst:-false}, 00:27:38.902 "ddgst": ${ddgst:-false} 00:27:38.902 }, 00:27:38.902 "method": "bdev_nvme_attach_controller" 00:27:38.902 } 00:27:38.902 EOF 00:27:38.902 )") 00:27:38.902 12:23:39 -- target/dif.sh@82 -- # gen_fio_conf 
00:27:38.902 12:23:39 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:38.902 12:23:39 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:38.902 12:23:39 -- target/dif.sh@54 -- # local file 00:27:38.902 12:23:39 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:38.902 12:23:39 -- target/dif.sh@56 -- # cat 00:27:38.902 12:23:39 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:38.902 12:23:39 -- common/autotest_common.sh@1327 -- # shift 00:27:38.902 12:23:39 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:38.902 12:23:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:38.902 12:23:39 -- nvmf/common.sh@543 -- # cat 00:27:38.902 12:23:39 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:38.902 12:23:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:38.902 12:23:39 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:38.902 12:23:39 -- target/dif.sh@72 -- # (( file <= files )) 00:27:38.902 12:23:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:38.902 12:23:39 -- target/dif.sh@73 -- # cat 00:27:38.902 12:23:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:38.902 12:23:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:38.902 { 00:27:38.902 "params": { 00:27:38.902 "name": "Nvme$subsystem", 00:27:38.902 "trtype": "$TEST_TRANSPORT", 00:27:38.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.902 "adrfam": "ipv4", 00:27:38.902 "trsvcid": "$NVMF_PORT", 00:27:38.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.902 "hdgst": ${hdgst:-false}, 00:27:38.902 "ddgst": ${ddgst:-false} 00:27:38.902 }, 00:27:38.902 "method": "bdev_nvme_attach_controller" 00:27:38.902 } 00:27:38.902 EOF 00:27:38.902 )") 00:27:38.902 12:23:39 -- target/dif.sh@72 -- # (( file++ )) 00:27:38.902 12:23:39 -- target/dif.sh@72 -- # (( file <= files )) 00:27:38.902 12:23:39 -- target/dif.sh@73 -- # cat 00:27:38.902 12:23:39 -- nvmf/common.sh@543 -- # cat 00:27:38.902 12:23:39 -- target/dif.sh@72 -- # (( file++ )) 00:27:38.902 12:23:39 -- target/dif.sh@72 -- # (( file <= files )) 00:27:38.902 12:23:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:38.902 12:23:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:38.902 { 00:27:38.902 "params": { 00:27:38.902 "name": "Nvme$subsystem", 00:27:38.902 "trtype": "$TEST_TRANSPORT", 00:27:38.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.902 "adrfam": "ipv4", 00:27:38.902 "trsvcid": "$NVMF_PORT", 00:27:38.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.902 "hdgst": ${hdgst:-false}, 00:27:38.902 "ddgst": ${ddgst:-false} 00:27:38.902 }, 00:27:38.902 "method": "bdev_nvme_attach_controller" 00:27:38.902 } 00:27:38.902 EOF 00:27:38.902 )") 00:27:38.902 12:23:39 -- nvmf/common.sh@543 -- # cat 00:27:38.902 12:23:39 -- nvmf/common.sh@545 -- # jq . 
00:27:38.902 12:23:39 -- nvmf/common.sh@546 -- # IFS=, 00:27:38.902 12:23:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:38.902 "params": { 00:27:38.902 "name": "Nvme0", 00:27:38.902 "trtype": "tcp", 00:27:38.902 "traddr": "10.0.0.2", 00:27:38.902 "adrfam": "ipv4", 00:27:38.902 "trsvcid": "4420", 00:27:38.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:38.902 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:38.902 "hdgst": false, 00:27:38.902 "ddgst": false 00:27:38.902 }, 00:27:38.902 "method": "bdev_nvme_attach_controller" 00:27:38.902 },{ 00:27:38.902 "params": { 00:27:38.902 "name": "Nvme1", 00:27:38.902 "trtype": "tcp", 00:27:38.902 "traddr": "10.0.0.2", 00:27:38.902 "adrfam": "ipv4", 00:27:38.902 "trsvcid": "4420", 00:27:38.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:38.902 "hdgst": false, 00:27:38.902 "ddgst": false 00:27:38.902 }, 00:27:38.902 "method": "bdev_nvme_attach_controller" 00:27:38.902 },{ 00:27:38.902 "params": { 00:27:38.902 "name": "Nvme2", 00:27:38.902 "trtype": "tcp", 00:27:38.902 "traddr": "10.0.0.2", 00:27:38.902 "adrfam": "ipv4", 00:27:38.902 "trsvcid": "4420", 00:27:38.902 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:38.902 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:38.902 "hdgst": false, 00:27:38.902 "ddgst": false 00:27:38.902 }, 00:27:38.902 "method": "bdev_nvme_attach_controller" 00:27:38.902 }' 00:27:38.902 12:23:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:38.902 12:23:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:38.902 12:23:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:38.902 12:23:39 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:38.902 12:23:39 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:38.902 12:23:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:38.902 12:23:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:38.902 12:23:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:38.902 12:23:39 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:38.902 12:23:39 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.902 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:38.902 ... 00:27:38.902 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:38.902 ... 00:27:38.902 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:38.902 ... 
00:27:38.902 fio-3.35 00:27:38.902 Starting 24 threads 00:27:38.902 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.136 00:27:51.136 filename0: (groupid=0, jobs=1): err= 0: pid=3590614: Fri Apr 26 12:23:50 2024 00:27:51.136 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10016msec) 00:27:51.136 slat (nsec): min=5600, max=52523, avg=11344.04, stdev=6440.46 00:27:51.136 clat (usec): min=17994, max=36653, avg=33289.30, stdev=1655.99 00:27:51.136 lat (usec): min=18003, max=36661, avg=33300.64, stdev=1655.62 00:27:51.136 clat percentiles (usec): 00:27:51.136 | 1.00th=[21103], 5.00th=[32375], 10.00th=[32375], 20.00th=[32900], 00:27:51.136 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:27:51.136 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:27:51.136 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:27:51.136 | 99.99th=[36439] 00:27:51.136 bw ( KiB/s): min= 1788, max= 2048, per=4.06%, avg=1912.55, stdev=77.61, samples=20 00:27:51.136 iops : min= 447, max= 512, avg=478.10, stdev=19.34, samples=20 00:27:51.136 lat (msec) : 20=0.38%, 50=99.63% 00:27:51.136 cpu : usr=98.88%, sys=0.74%, ctx=96, majf=0, minf=9 00:27:51.136 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:51.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.136 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.136 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.136 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.136 filename0: (groupid=0, jobs=1): err= 0: pid=3590615: Fri Apr 26 12:23:50 2024 00:27:51.136 read: IOPS=477, BW=1911KiB/s (1956kB/s)(18.7MiB/10016msec) 00:27:51.136 slat (nsec): min=5573, max=71947, avg=11324.20, stdev=8396.74 00:27:51.136 clat (usec): min=19747, max=46796, avg=33403.99, stdev=1330.33 00:27:51.136 lat (usec): min=19753, max=46820, avg=33415.31, stdev=1329.30 00:27:51.136 clat percentiles (usec): 00:27:51.136 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32900], 00:27:51.136 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:27:51.136 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:27:51.136 | 99.00th=[35914], 99.50th=[35914], 99.90th=[45351], 99.95th=[46924], 00:27:51.136 | 99.99th=[46924] 00:27:51.136 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1906.11, stdev=58.64, samples=19 00:27:51.136 iops : min= 448, max= 512, avg=476.53, stdev=14.66, samples=19 00:27:51.136 lat (msec) : 20=0.04%, 50=99.96% 00:27:51.136 cpu : usr=99.17%, sys=0.56%, ctx=23, majf=0, minf=9 00:27:51.136 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:51.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.136 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.136 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.136 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.136 filename0: (groupid=0, jobs=1): err= 0: pid=3590617: Fri Apr 26 12:23:50 2024 00:27:51.136 read: IOPS=494, BW=1978KiB/s (2025kB/s)(19.3MiB/10010msec) 00:27:51.136 slat (nsec): min=5503, max=86206, avg=15419.86, stdev=11260.38 00:27:51.136 clat (usec): min=11559, max=52116, avg=32241.54, stdev=4568.95 00:27:51.136 lat (usec): min=11569, max=52132, avg=32256.96, stdev=4570.17 00:27:51.136 clat percentiles (usec): 00:27:51.136 | 1.00th=[20579], 
5.00th=[22938], 10.00th=[25297], 20.00th=[29492], 00:27:51.136 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:51.136 | 70.00th=[33817], 80.00th=[33817], 90.00th=[35390], 95.00th=[39060], 00:27:51.136 | 99.00th=[45876], 99.50th=[49021], 99.90th=[52167], 99.95th=[52167], 00:27:51.136 | 99.99th=[52167] 00:27:51.136 bw ( KiB/s): min= 1792, max= 2208, per=4.20%, avg=1978.53, stdev=96.23, samples=19 00:27:51.136 iops : min= 448, max= 552, avg=494.63, stdev=24.06, samples=19 00:27:51.136 lat (msec) : 20=0.57%, 50=99.11%, 100=0.32% 00:27:51.136 cpu : usr=99.02%, sys=0.67%, ctx=70, majf=0, minf=9 00:27:51.136 IO depths : 1=2.4%, 2=4.9%, 4=11.8%, 8=68.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:27:51.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.136 complete : 0=0.0%, 4=90.9%, 8=5.4%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.136 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.136 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.136 filename0: (groupid=0, jobs=1): err= 0: pid=3590618: Fri Apr 26 12:23:50 2024 00:27:51.136 read: IOPS=525, BW=2101KiB/s (2151kB/s)(20.6MiB/10016msec) 00:27:51.136 slat (nsec): min=5503, max=54434, avg=10908.60, stdev=7538.53 00:27:51.136 clat (usec): min=13663, max=48808, avg=30354.34, stdev=5119.47 00:27:51.136 lat (usec): min=13669, max=48821, avg=30365.25, stdev=5122.08 00:27:51.136 clat percentiles (usec): 00:27:51.136 | 1.00th=[17695], 5.00th=[20841], 10.00th=[21890], 20.00th=[23725], 00:27:51.136 | 30.00th=[32113], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:27:51.136 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:51.136 | 99.00th=[35914], 99.50th=[36439], 99.90th=[47449], 99.95th=[47449], 00:27:51.136 | 99.99th=[49021] 00:27:51.136 bw ( KiB/s): min= 1788, max= 2784, per=4.46%, avg=2100.85, stdev=316.39, samples=20 00:27:51.136 iops : min= 447, max= 696, avg=525.15, stdev=79.06, samples=20 00:27:51.136 lat (msec) : 20=2.87%, 50=97.13% 00:27:51.136 cpu : usr=98.54%, sys=0.85%, ctx=190, majf=0, minf=9 00:27:51.136 IO depths : 1=3.4%, 2=8.6%, 4=21.8%, 8=57.1%, 16=9.1%, 32=0.0%, >=64=0.0% 00:27:51.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.136 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.136 issued rwts: total=5261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.136 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.136 filename0: (groupid=0, jobs=1): err= 0: pid=3590619: Fri Apr 26 12:23:50 2024 00:27:51.136 read: IOPS=477, BW=1910KiB/s (1955kB/s)(18.7MiB/10013msec) 00:27:51.136 slat (nsec): min=5491, max=88480, avg=21057.31, stdev=15055.33 00:27:51.136 clat (usec): min=14375, max=55460, avg=33315.26, stdev=3984.01 00:27:51.136 lat (usec): min=14381, max=55466, avg=33336.32, stdev=3982.93 00:27:51.136 clat percentiles (usec): 00:27:51.136 | 1.00th=[20579], 5.00th=[27919], 10.00th=[32113], 20.00th=[32375], 00:27:51.136 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:27:51.136 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[37487], 00:27:51.136 | 99.00th=[49546], 99.50th=[54264], 99.90th=[54789], 99.95th=[55313], 00:27:51.136 | 99.99th=[55313] 00:27:51.136 bw ( KiB/s): min= 1788, max= 2048, per=4.05%, avg=1906.89, stdev=69.89, samples=19 00:27:51.136 iops : min= 447, max= 512, avg=476.68, stdev=17.54, samples=19 00:27:51.136 lat (msec) : 20=0.63%, 50=98.41%, 100=0.96% 00:27:51.136 cpu : usr=99.05%, 
sys=0.64%, ctx=42, majf=0, minf=9 00:27:51.136 IO depths : 1=4.7%, 2=9.3%, 4=20.0%, 8=57.5%, 16=8.5%, 32=0.0%, >=64=0.0% 00:27:51.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.136 complete : 0=0.0%, 4=92.9%, 8=1.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.136 issued rwts: total=4780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.136 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.137 filename0: (groupid=0, jobs=1): err= 0: pid=3590620: Fri Apr 26 12:23:50 2024 00:27:51.137 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10023msec) 00:27:51.137 slat (nsec): min=5605, max=70315, avg=14114.74, stdev=10490.09 00:27:51.137 clat (usec): min=20193, max=47891, avg=33401.47, stdev=1433.73 00:27:51.137 lat (usec): min=20201, max=47897, avg=33415.58, stdev=1431.84 00:27:51.137 clat percentiles (usec): 00:27:51.137 | 1.00th=[29754], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:27:51.137 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:27:51.137 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:27:51.137 | 99.00th=[36439], 99.50th=[42730], 99.90th=[45876], 99.95th=[46400], 00:27:51.137 | 99.99th=[47973] 00:27:51.137 bw ( KiB/s): min= 1792, max= 2048, per=4.04%, avg=1905.25, stdev=70.54, samples=20 00:27:51.137 iops : min= 448, max= 512, avg=476.30, stdev=17.64, samples=20 00:27:51.137 lat (msec) : 50=100.00% 00:27:51.137 cpu : usr=99.07%, sys=0.59%, ctx=92, majf=0, minf=9 00:27:51.137 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:27:51.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.137 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.137 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.137 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.137 filename0: (groupid=0, jobs=1): err= 0: pid=3590621: Fri Apr 26 12:23:50 2024 00:27:51.137 read: IOPS=477, BW=1908KiB/s (1954kB/s)(18.7MiB/10027msec) 00:27:51.137 slat (nsec): min=5647, max=58689, avg=11582.13, stdev=7444.00 00:27:51.137 clat (usec): min=18430, max=46412, avg=33436.53, stdev=1579.62 00:27:51.137 lat (usec): min=18438, max=46429, avg=33448.11, stdev=1580.09 00:27:51.137 clat percentiles (usec): 00:27:51.137 | 1.00th=[29754], 5.00th=[32375], 10.00th=[32375], 20.00th=[32900], 00:27:51.137 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:27:51.137 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:51.137 | 99.00th=[36439], 99.50th=[40633], 99.90th=[46400], 99.95th=[46400], 00:27:51.137 | 99.99th=[46400] 00:27:51.137 bw ( KiB/s): min= 1788, max= 2048, per=4.05%, avg=1906.55, stdev=71.76, samples=20 00:27:51.137 iops : min= 447, max= 512, avg=476.60, stdev=17.93, samples=20 00:27:51.137 lat (msec) : 20=0.33%, 50=99.67% 00:27:51.137 cpu : usr=99.22%, sys=0.52%, ctx=10, majf=0, minf=9 00:27:51.137 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:27:51.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.137 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.137 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.137 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.137 filename0: (groupid=0, jobs=1): err= 0: pid=3590622: Fri Apr 26 12:23:50 2024 00:27:51.137 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10009msec) 00:27:51.137 
slat (nsec): min=5511, max=91432, avg=22987.85, stdev=14475.13 00:27:51.137 clat (usec): min=10945, max=53082, avg=33238.64, stdev=3217.66 00:27:51.137 lat (usec): min=10955, max=53088, avg=33261.63, stdev=3217.45 00:27:51.137 clat percentiles (usec): 00:27:51.137 | 1.00th=[21627], 5.00th=[31851], 10.00th=[32113], 20.00th=[32637], 00:27:51.137 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:27:51.137 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:51.137 | 99.00th=[46400], 99.50th=[50594], 99.90th=[53216], 99.95th=[53216], 00:27:51.137 | 99.99th=[53216] 00:27:51.137 bw ( KiB/s): min= 1788, max= 2112, per=4.05%, avg=1907.95, stdev=80.74, samples=19 00:27:51.137 iops : min= 447, max= 528, avg=476.95, stdev=20.24, samples=19 00:27:51.137 lat (msec) : 20=0.42%, 50=98.91%, 100=0.67% 00:27:51.137 cpu : usr=98.92%, sys=0.78%, ctx=47, majf=0, minf=9 00:27:51.137 IO depths : 1=5.6%, 2=11.4%, 4=23.5%, 8=52.5%, 16=7.0%, 32=0.0%, >=64=0.0% 00:27:51.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.137 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.137 issued rwts: total=4788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.137 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.137 filename1: (groupid=0, jobs=1): err= 0: pid=3590623: Fri Apr 26 12:23:50 2024 00:27:51.137 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10021msec) 00:27:51.137 slat (nsec): min=5508, max=74356, avg=17625.99, stdev=11807.52 00:27:51.137 clat (usec): min=17332, max=50178, avg=33011.30, stdev=2833.22 00:27:51.137 lat (usec): min=17341, max=50184, avg=33028.93, stdev=2834.21 00:27:51.137 clat percentiles (usec): 00:27:51.137 | 1.00th=[22152], 5.00th=[27657], 10.00th=[32113], 20.00th=[32637], 00:27:51.137 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:27:51.137 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[35390], 00:27:51.137 | 99.00th=[43779], 99.50th=[45876], 99.90th=[50070], 99.95th=[50070], 00:27:51.137 | 99.99th=[50070] 00:27:51.137 bw ( KiB/s): min= 1792, max= 2096, per=4.09%, avg=1927.00, stdev=64.96, samples=20 00:27:51.137 iops : min= 448, max= 524, avg=481.75, stdev=16.24, samples=20 00:27:51.137 lat (msec) : 20=0.25%, 50=99.63%, 100=0.12% 00:27:51.137 cpu : usr=99.11%, sys=0.57%, ctx=35, majf=0, minf=9 00:27:51.137 IO depths : 1=4.9%, 2=10.8%, 4=23.9%, 8=52.8%, 16=7.6%, 32=0.0%, >=64=0.0% 00:27:51.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.137 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.137 issued rwts: total=4834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.137 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.137 filename1: (groupid=0, jobs=1): err= 0: pid=3590625: Fri Apr 26 12:23:50 2024 00:27:51.137 read: IOPS=475, BW=1901KiB/s (1947kB/s)(18.6MiB/10007msec) 00:27:51.137 slat (nsec): min=5502, max=83608, avg=19313.63, stdev=14563.24 00:27:51.137 clat (usec): min=12518, max=61899, avg=33513.32, stdev=5527.20 00:27:51.137 lat (usec): min=12524, max=61905, avg=33532.64, stdev=5527.04 00:27:51.137 clat percentiles (usec): 00:27:51.137 | 1.00th=[19792], 5.00th=[23987], 10.00th=[28181], 20.00th=[32375], 00:27:51.137 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33424], 00:27:51.137 | 70.00th=[33817], 80.00th=[34341], 90.00th=[39060], 95.00th=[44827], 00:27:51.137 | 99.00th=[54789], 99.50th=[55837], 99.90th=[62129], 
99.95th=[62129], 00:27:51.137 | 99.99th=[62129] 00:27:51.137 bw ( KiB/s): min= 1712, max= 2064, per=4.04%, avg=1901.16, stdev=102.49, samples=19 00:27:51.137 iops : min= 428, max= 516, avg=475.21, stdev=25.62, samples=19 00:27:51.137 lat (msec) : 20=1.18%, 50=96.61%, 100=2.21% 00:27:51.137 cpu : usr=98.97%, sys=0.73%, ctx=35, majf=0, minf=9 00:27:51.137 IO depths : 1=2.8%, 2=5.7%, 4=14.1%, 8=66.4%, 16=11.0%, 32=0.0%, >=64=0.0% 00:27:51.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.137 complete : 0=0.0%, 4=91.5%, 8=4.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.137 issued rwts: total=4756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.137 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.137 filename1: (groupid=0, jobs=1): err= 0: pid=3590626: Fri Apr 26 12:23:50 2024 00:27:51.137 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10002msec) 00:27:51.137 slat (nsec): min=5592, max=67779, avg=17879.60, stdev=11698.59 00:27:51.137 clat (usec): min=22440, max=50632, avg=33399.32, stdev=1390.36 00:27:51.137 lat (usec): min=22447, max=50654, avg=33417.20, stdev=1388.76 00:27:51.137 clat percentiles (usec): 00:27:51.137 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:51.137 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:27:51.137 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:27:51.137 | 99.00th=[35914], 99.50th=[36439], 99.90th=[50594], 99.95th=[50594], 00:27:51.137 | 99.99th=[50594] 00:27:51.137 bw ( KiB/s): min= 1788, max= 2043, per=4.04%, avg=1905.84, stdev=58.47, samples=19 00:27:51.137 iops : min= 447, max= 510, avg=476.42, stdev=14.52, samples=19 00:27:51.137 lat (msec) : 50=99.66%, 100=0.34% 00:27:51.137 cpu : usr=98.58%, sys=0.88%, ctx=76, majf=0, minf=9 00:27:51.137 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:51.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.137 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.137 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.137 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.137 filename1: (groupid=0, jobs=1): err= 0: pid=3590627: Fri Apr 26 12:23:50 2024 00:27:51.137 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10010msec) 00:27:51.137 slat (nsec): min=5564, max=90963, avg=22735.76, stdev=14637.98 00:27:51.137 clat (usec): min=11258, max=64457, avg=33511.70, stdev=2568.64 00:27:51.137 lat (usec): min=11264, max=64471, avg=33534.44, stdev=2566.83 00:27:51.137 clat percentiles (usec): 00:27:51.137 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:51.137 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:27:51.137 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:51.137 | 99.00th=[43779], 99.50th=[45351], 99.90th=[64226], 99.95th=[64226], 00:27:51.137 | 99.99th=[64226] 00:27:51.137 bw ( KiB/s): min= 1788, max= 2048, per=4.02%, avg=1892.63, stdev=68.77, samples=19 00:27:51.137 iops : min= 447, max= 512, avg=473.16, stdev=17.19, samples=19 00:27:51.137 lat (msec) : 20=0.53%, 50=99.14%, 100=0.34% 00:27:51.137 cpu : usr=98.59%, sys=0.90%, ctx=100, majf=0, minf=9 00:27:51.137 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:51.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.137 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:27:51.137 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.137 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.137 filename1: (groupid=0, jobs=1): err= 0: pid=3590628: Fri Apr 26 12:23:50 2024 00:27:51.137 read: IOPS=477, BW=1910KiB/s (1956kB/s)(18.7MiB/10019msec) 00:27:51.137 slat (nsec): min=5513, max=92316, avg=20293.67, stdev=15440.51 00:27:51.137 clat (usec): min=20261, max=55905, avg=33332.11, stdev=1660.10 00:27:51.137 lat (usec): min=20267, max=55912, avg=33352.40, stdev=1657.74 00:27:51.137 clat percentiles (usec): 00:27:51.137 | 1.00th=[28705], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:51.137 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:27:51.137 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:27:51.137 | 99.00th=[36963], 99.50th=[39060], 99.90th=[55837], 99.95th=[55837], 00:27:51.137 | 99.99th=[55837] 00:27:51.137 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1906.40, stdev=57.12, samples=20 00:27:51.137 iops : min= 448, max= 512, avg=476.60, stdev=14.28, samples=20 00:27:51.137 lat (msec) : 50=99.87%, 100=0.13% 00:27:51.138 cpu : usr=99.24%, sys=0.47%, ctx=52, majf=0, minf=9 00:27:51.138 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:51.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.138 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.138 filename1: (groupid=0, jobs=1): err= 0: pid=3590629: Fri Apr 26 12:23:50 2024 00:27:51.138 read: IOPS=482, BW=1930KiB/s (1977kB/s)(18.9MiB/10012msec) 00:27:51.138 slat (nsec): min=5523, max=73907, avg=15694.67, stdev=9741.79 00:27:51.138 clat (usec): min=2478, max=36718, avg=33006.36, stdev=3148.17 00:27:51.138 lat (usec): min=2495, max=36737, avg=33022.06, stdev=3148.66 00:27:51.138 clat percentiles (usec): 00:27:51.138 | 1.00th=[17171], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:51.138 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:27:51.138 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:51.138 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:27:51.138 | 99.99th=[36963] 00:27:51.138 bw ( KiB/s): min= 1788, max= 2304, per=4.09%, avg=1926.00, stdev=113.81, samples=20 00:27:51.138 iops : min= 447, max= 576, avg=481.50, stdev=28.45, samples=20 00:27:51.138 lat (msec) : 4=0.41%, 10=0.46%, 20=0.58%, 50=98.55% 00:27:51.138 cpu : usr=99.05%, sys=0.62%, ctx=71, majf=0, minf=9 00:27:51.138 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:51.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.138 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.138 filename1: (groupid=0, jobs=1): err= 0: pid=3590630: Fri Apr 26 12:23:50 2024 00:27:51.138 read: IOPS=477, BW=1908KiB/s (1954kB/s)(18.6MiB/10008msec) 00:27:51.138 slat (nsec): min=5533, max=90651, avg=21162.94, stdev=14543.14 00:27:51.138 clat (usec): min=12599, max=51277, avg=33334.75, stdev=2144.13 00:27:51.138 lat (usec): min=12615, max=51293, avg=33355.91, stdev=2142.62 00:27:51.138 clat percentiles 
(usec): 00:27:51.138 | 1.00th=[28443], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:51.138 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:27:51.138 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:27:51.138 | 99.00th=[36439], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:27:51.138 | 99.99th=[51119] 00:27:51.138 bw ( KiB/s): min= 1792, max= 2048, per=4.04%, avg=1901.89, stdev=65.84, samples=19 00:27:51.138 iops : min= 448, max= 512, avg=475.47, stdev=16.46, samples=19 00:27:51.138 lat (msec) : 20=0.34%, 50=99.16%, 100=0.50% 00:27:51.138 cpu : usr=98.62%, sys=0.82%, ctx=75, majf=0, minf=9 00:27:51.138 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:51.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 issued rwts: total=4774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.138 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.138 filename1: (groupid=0, jobs=1): err= 0: pid=3590631: Fri Apr 26 12:23:50 2024 00:27:51.138 read: IOPS=477, BW=1910KiB/s (1956kB/s)(18.7MiB/10020msec) 00:27:51.138 slat (nsec): min=5576, max=63272, avg=11921.01, stdev=8179.63 00:27:51.138 clat (usec): min=21853, max=45479, avg=33413.44, stdev=1048.46 00:27:51.138 lat (usec): min=21875, max=45487, avg=33425.36, stdev=1048.71 00:27:51.138 clat percentiles (usec): 00:27:51.138 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32900], 00:27:51.138 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:27:51.138 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:27:51.138 | 99.00th=[35914], 99.50th=[36439], 99.90th=[42206], 99.95th=[42206], 00:27:51.138 | 99.99th=[45351] 00:27:51.138 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1907.15, stdev=57.48, samples=20 00:27:51.138 iops : min= 448, max= 512, avg=476.75, stdev=14.36, samples=20 00:27:51.138 lat (msec) : 50=100.00% 00:27:51.138 cpu : usr=99.19%, sys=0.53%, ctx=21, majf=0, minf=9 00:27:51.138 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:27:51.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.138 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.138 filename2: (groupid=0, jobs=1): err= 0: pid=3590632: Fri Apr 26 12:23:50 2024 00:27:51.138 read: IOPS=475, BW=1903KiB/s (1948kB/s)(18.6MiB/10023msec) 00:27:51.138 slat (nsec): min=5561, max=86672, avg=21528.47, stdev=15232.27 00:27:51.138 clat (usec): min=26580, max=46852, avg=33443.38, stdev=1419.95 00:27:51.138 lat (usec): min=26591, max=46859, avg=33464.91, stdev=1416.37 00:27:51.138 clat percentiles (usec): 00:27:51.138 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:51.138 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:27:51.138 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:51.138 | 99.00th=[39584], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:27:51.138 | 99.99th=[46924] 00:27:51.138 bw ( KiB/s): min= 1792, max= 2048, per=4.03%, avg=1900.00, stdev=62.44, samples=20 00:27:51.138 iops : min= 448, max= 512, avg=475.00, stdev=15.61, samples=20 00:27:51.138 lat (msec) : 50=100.00% 00:27:51.138 cpu : 
usr=99.21%, sys=0.48%, ctx=59, majf=0, minf=9 00:27:51.138 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:51.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.138 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.138 filename2: (groupid=0, jobs=1): err= 0: pid=3590634: Fri Apr 26 12:23:50 2024 00:27:51.138 read: IOPS=477, BW=1910KiB/s (1956kB/s)(18.7MiB/10019msec) 00:27:51.138 slat (nsec): min=5553, max=51228, avg=10587.85, stdev=6394.53 00:27:51.138 clat (usec): min=22483, max=42239, avg=33404.64, stdev=1178.80 00:27:51.138 lat (usec): min=22490, max=42247, avg=33415.23, stdev=1178.74 00:27:51.138 clat percentiles (usec): 00:27:51.138 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:27:51.138 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:27:51.138 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:27:51.138 | 99.00th=[35914], 99.50th=[36439], 99.90th=[40633], 99.95th=[41157], 00:27:51.138 | 99.99th=[42206] 00:27:51.138 bw ( KiB/s): min= 1788, max= 2048, per=4.05%, avg=1906.11, stdev=59.13, samples=19 00:27:51.138 iops : min= 447, max= 512, avg=476.53, stdev=14.78, samples=19 00:27:51.138 lat (msec) : 50=100.00% 00:27:51.138 cpu : usr=98.84%, sys=0.73%, ctx=167, majf=0, minf=9 00:27:51.138 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:51.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.138 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.138 filename2: (groupid=0, jobs=1): err= 0: pid=3590635: Fri Apr 26 12:23:50 2024 00:27:51.138 read: IOPS=485, BW=1942KiB/s (1989kB/s)(19.0MiB/10009msec) 00:27:51.138 slat (nsec): min=5500, max=78928, avg=20019.44, stdev=12423.73 00:27:51.138 clat (usec): min=11113, max=51183, avg=32772.25, stdev=3300.08 00:27:51.138 lat (usec): min=11123, max=51200, avg=32792.27, stdev=3301.18 00:27:51.138 clat percentiles (usec): 00:27:51.138 | 1.00th=[21890], 5.00th=[24511], 10.00th=[32113], 20.00th=[32375], 00:27:51.138 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:27:51.138 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:27:51.138 | 99.00th=[44303], 99.50th=[48497], 99.90th=[51119], 99.95th=[51119], 00:27:51.138 | 99.99th=[51119] 00:27:51.138 bw ( KiB/s): min= 1788, max= 2288, per=4.12%, avg=1938.26, stdev=103.03, samples=19 00:27:51.138 iops : min= 447, max= 572, avg=484.53, stdev=25.82, samples=19 00:27:51.138 lat (msec) : 20=0.33%, 50=99.18%, 100=0.49% 00:27:51.138 cpu : usr=99.00%, sys=0.74%, ctx=13, majf=0, minf=9 00:27:51.138 IO depths : 1=5.4%, 2=10.9%, 4=22.7%, 8=53.7%, 16=7.2%, 32=0.0%, >=64=0.0% 00:27:51.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 issued rwts: total=4860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.138 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.138 filename2: (groupid=0, jobs=1): err= 0: pid=3590636: Fri Apr 26 12:23:50 2024 00:27:51.138 read: IOPS=483, BW=1932KiB/s 
(1979kB/s)(18.9MiB/10020msec) 00:27:51.138 slat (nsec): min=5499, max=76502, avg=15806.09, stdev=11203.37 00:27:51.138 clat (usec): min=18640, max=54096, avg=32998.04, stdev=3799.02 00:27:51.138 lat (usec): min=18646, max=54108, avg=33013.85, stdev=3799.66 00:27:51.138 clat percentiles (usec): 00:27:51.138 | 1.00th=[21103], 5.00th=[25035], 10.00th=[31327], 20.00th=[32375], 00:27:51.138 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:27:51.138 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35914], 00:27:51.138 | 99.00th=[50070], 99.50th=[51643], 99.90th=[54264], 99.95th=[54264], 00:27:51.138 | 99.99th=[54264] 00:27:51.138 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1929.55, stdev=74.86, samples=20 00:27:51.138 iops : min= 448, max= 512, avg=482.35, stdev=18.79, samples=20 00:27:51.138 lat (msec) : 20=0.43%, 50=98.60%, 100=0.97% 00:27:51.138 cpu : usr=98.87%, sys=0.85%, ctx=12, majf=0, minf=9 00:27:51.138 IO depths : 1=5.1%, 2=10.3%, 4=21.7%, 8=55.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:27:51.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.138 issued rwts: total=4840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.138 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.138 filename2: (groupid=0, jobs=1): err= 0: pid=3590637: Fri Apr 26 12:23:50 2024 00:27:51.138 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10013msec) 00:27:51.138 slat (nsec): min=5559, max=67489, avg=13590.92, stdev=8926.10 00:27:51.138 clat (usec): min=12761, max=61698, avg=33446.66, stdev=2376.44 00:27:51.138 lat (usec): min=12769, max=61714, avg=33460.25, stdev=2375.97 00:27:51.138 clat percentiles (usec): 00:27:51.138 | 1.00th=[26608], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:51.139 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:27:51.139 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:51.139 | 99.00th=[41681], 99.50th=[50070], 99.90th=[53216], 99.95th=[53216], 00:27:51.139 | 99.99th=[61604] 00:27:51.139 bw ( KiB/s): min= 1792, max= 2027, per=4.04%, avg=1901.79, stdev=57.85, samples=19 00:27:51.139 iops : min= 448, max= 506, avg=475.37, stdev=14.45, samples=19 00:27:51.139 lat (msec) : 20=0.34%, 50=99.12%, 100=0.54% 00:27:51.139 cpu : usr=98.79%, sys=0.66%, ctx=137, majf=0, minf=9 00:27:51.139 IO depths : 1=3.7%, 2=9.8%, 4=24.5%, 8=53.2%, 16=8.8%, 32=0.0%, >=64=0.0% 00:27:51.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.139 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.139 issued rwts: total=4774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.139 filename2: (groupid=0, jobs=1): err= 0: pid=3590638: Fri Apr 26 12:23:50 2024 00:27:51.139 read: IOPS=613, BW=2453KiB/s (2512kB/s)(24.0MiB/10022msec) 00:27:51.139 slat (nsec): min=2959, max=27487, avg=6547.05, stdev=932.00 00:27:51.139 clat (usec): min=1507, max=35756, avg=26003.85, stdev=6027.49 00:27:51.139 lat (usec): min=1512, max=35763, avg=26010.39, stdev=6027.63 00:27:51.139 clat percentiles (usec): 00:27:51.139 | 1.00th=[ 2573], 5.00th=[19792], 10.00th=[20579], 20.00th=[21627], 00:27:51.139 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23987], 60.00th=[25297], 00:27:51.139 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:27:51.139 | 99.00th=[33817], 
99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:27:51.139 | 99.99th=[35914] 00:27:51.139 bw ( KiB/s): min= 1916, max= 2944, per=5.21%, avg=2455.50, stdev=420.08, samples=20 00:27:51.139 iops : min= 479, max= 736, avg=613.85, stdev=105.00, samples=20 00:27:51.139 lat (msec) : 2=0.75%, 4=0.55%, 10=0.52%, 20=4.73%, 50=93.44% 00:27:51.139 cpu : usr=99.17%, sys=0.58%, ctx=13, majf=0, minf=0 00:27:51.139 IO depths : 1=4.5%, 2=9.1%, 4=20.0%, 8=58.3%, 16=8.1%, 32=0.0%, >=64=0.0% 00:27:51.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.139 complete : 0=0.0%, 4=92.7%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.139 issued rwts: total=6146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.139 filename2: (groupid=0, jobs=1): err= 0: pid=3590639: Fri Apr 26 12:23:50 2024 00:27:51.139 read: IOPS=587, BW=2351KiB/s (2407kB/s)(23.0MiB/10016msec) 00:27:51.139 slat (nsec): min=5494, max=47330, avg=8532.21, stdev=4863.32 00:27:51.139 clat (usec): min=11501, max=54579, avg=27171.19, stdev=6560.20 00:27:51.139 lat (usec): min=11510, max=54588, avg=27179.73, stdev=6561.42 00:27:51.139 clat percentiles (usec): 00:27:51.139 | 1.00th=[11863], 5.00th=[13173], 10.00th=[20055], 20.00th=[21890], 00:27:51.139 | 30.00th=[23200], 40.00th=[24511], 50.00th=[26870], 60.00th=[32113], 00:27:51.139 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:27:51.139 | 99.00th=[35390], 99.50th=[38011], 99.90th=[53740], 99.95th=[53740], 00:27:51.139 | 99.99th=[54789] 00:27:51.139 bw ( KiB/s): min= 1788, max= 2858, per=4.98%, avg=2346.65, stdev=392.38, samples=20 00:27:51.139 iops : min= 447, max= 714, avg=586.55, stdev=98.06, samples=20 00:27:51.139 lat (msec) : 20=9.77%, 50=89.93%, 100=0.31% 00:27:51.139 cpu : usr=98.69%, sys=0.80%, ctx=44, majf=0, minf=9 00:27:51.139 IO depths : 1=1.9%, 2=4.1%, 4=12.6%, 8=70.4%, 16=11.0%, 32=0.0%, >=64=0.0% 00:27:51.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.139 complete : 0=0.0%, 4=90.6%, 8=4.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.139 issued rwts: total=5886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.139 filename2: (groupid=0, jobs=1): err= 0: pid=3590640: Fri Apr 26 12:23:50 2024 00:27:51.139 read: IOPS=477, BW=1910KiB/s (1956kB/s)(18.7MiB/10017msec) 00:27:51.139 slat (nsec): min=5572, max=82385, avg=16360.83, stdev=11531.12 00:27:51.139 clat (usec): min=17355, max=46122, avg=33361.63, stdev=1357.06 00:27:51.139 lat (usec): min=17361, max=46135, avg=33377.99, stdev=1357.18 00:27:51.139 clat percentiles (usec): 00:27:51.139 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:27:51.139 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:27:51.139 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:27:51.139 | 99.00th=[36439], 99.50th=[36439], 99.90th=[39060], 99.95th=[42206], 00:27:51.139 | 99.99th=[45876] 00:27:51.139 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1907.00, stdev=57.20, samples=20 00:27:51.139 iops : min= 448, max= 512, avg=476.75, stdev=14.30, samples=20 00:27:51.139 lat (msec) : 20=0.33%, 50=99.67% 00:27:51.139 cpu : usr=98.48%, sys=0.92%, ctx=126, majf=0, minf=9 00:27:51.139 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:51.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.139 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.139 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:51.139 00:27:51.139 Run status group 0 (all jobs): 00:27:51.139 READ: bw=46.0MiB/s (48.2MB/s), 1899KiB/s-2453KiB/s (1944kB/s-2512kB/s), io=461MiB (484MB), run=10002-10027msec 00:27:51.139 12:23:50 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:51.139 12:23:50 -- target/dif.sh@43 -- # local sub 00:27:51.139 12:23:50 -- target/dif.sh@45 -- # for sub in "$@" 00:27:51.139 12:23:50 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:51.139 12:23:50 -- target/dif.sh@36 -- # local sub_id=0 00:27:51.139 12:23:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:51.139 12:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.139 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:51.139 12:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.139 12:23:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:51.139 12:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.139 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:51.139 12:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.139 12:23:50 -- target/dif.sh@45 -- # for sub in "$@" 00:27:51.139 12:23:50 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:51.139 12:23:50 -- target/dif.sh@36 -- # local sub_id=1 00:27:51.139 12:23:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:51.139 12:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.139 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:51.139 12:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.139 12:23:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:51.139 12:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.139 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:51.139 12:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.139 12:23:50 -- target/dif.sh@45 -- # for sub in "$@" 00:27:51.139 12:23:50 -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:51.139 12:23:50 -- target/dif.sh@36 -- # local sub_id=2 00:27:51.139 12:23:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:51.139 12:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.139 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:51.139 12:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.139 12:23:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:51.139 12:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.139 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:51.139 12:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.139 12:23:50 -- target/dif.sh@115 -- # NULL_DIF=1 00:27:51.139 12:23:50 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:51.139 12:23:50 -- target/dif.sh@115 -- # numjobs=2 00:27:51.139 12:23:50 -- target/dif.sh@115 -- # iodepth=8 00:27:51.139 12:23:50 -- target/dif.sh@115 -- # runtime=5 00:27:51.139 12:23:50 -- target/dif.sh@115 -- # files=1 00:27:51.139 12:23:50 -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:51.139 12:23:50 -- target/dif.sh@28 -- # local sub 00:27:51.139 12:23:50 -- target/dif.sh@30 -- # for sub in "$@" 00:27:51.139 12:23:50 -- target/dif.sh@31 -- # create_subsystem 0 
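Run status group 0 above sums the 24 randread jobs to 46.0 MiB/s aggregate, with per-job bandwidth between 1899 KiB/s and 2453 KiB/s. A quick consistency check, as an annotation rather than part of the log:

    # 46.0 MiB/s split across 24 jobs is roughly 1963 KiB/s per job,
    # which falls inside the reported 1899-2453 KiB/s per-job range.
    echo 'scale=1; 46.0 * 1024 / 24' | bc    # prints 1962.6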
00:27:51.139 12:23:50 -- target/dif.sh@18 -- # local sub_id=0 00:27:51.139 12:23:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:51.139 12:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.139 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:51.139 bdev_null0 00:27:51.139 12:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.139 12:23:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:51.139 12:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.139 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:51.139 12:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.139 12:23:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:51.139 12:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.139 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:51.139 12:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.139 12:23:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:51.139 12:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.139 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:51.139 [2024-04-26 12:23:50.948812] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.139 12:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.139 12:23:50 -- target/dif.sh@30 -- # for sub in "$@" 00:27:51.139 12:23:50 -- target/dif.sh@31 -- # create_subsystem 1 00:27:51.139 12:23:50 -- target/dif.sh@18 -- # local sub_id=1 00:27:51.139 12:23:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:51.139 12:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.139 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:51.139 bdev_null1 00:27:51.139 12:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.139 12:23:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:51.139 12:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.139 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:51.139 12:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.139 12:23:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:51.139 12:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.140 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:51.140 12:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.140 12:23:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:51.140 12:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.140 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:51.140 12:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.140 12:23:50 -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:51.140 12:23:50 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:51.140 12:23:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:51.140 12:23:50 -- nvmf/common.sh@521 -- # config=() 00:27:51.140 12:23:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:27:51.140 12:23:50 -- nvmf/common.sh@521 -- # local subsystem config 00:27:51.140 12:23:50 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:51.140 12:23:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:51.140 12:23:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:51.140 { 00:27:51.140 "params": { 00:27:51.140 "name": "Nvme$subsystem", 00:27:51.140 "trtype": "$TEST_TRANSPORT", 00:27:51.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.140 "adrfam": "ipv4", 00:27:51.140 "trsvcid": "$NVMF_PORT", 00:27:51.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.140 "hdgst": ${hdgst:-false}, 00:27:51.140 "ddgst": ${ddgst:-false} 00:27:51.140 }, 00:27:51.140 "method": "bdev_nvme_attach_controller" 00:27:51.140 } 00:27:51.140 EOF 00:27:51.140 )") 00:27:51.140 12:23:51 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:51.140 12:23:50 -- target/dif.sh@82 -- # gen_fio_conf 00:27:51.140 12:23:51 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:51.140 12:23:51 -- target/dif.sh@54 -- # local file 00:27:51.140 12:23:51 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:51.140 12:23:51 -- target/dif.sh@56 -- # cat 00:27:51.140 12:23:51 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:51.140 12:23:51 -- common/autotest_common.sh@1327 -- # shift 00:27:51.140 12:23:51 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:51.140 12:23:51 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:51.140 12:23:51 -- nvmf/common.sh@543 -- # cat 00:27:51.140 12:23:51 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:51.140 12:23:51 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:51.140 12:23:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:51.140 12:23:51 -- target/dif.sh@72 -- # (( file <= files )) 00:27:51.140 12:23:51 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:51.140 12:23:51 -- target/dif.sh@73 -- # cat 00:27:51.140 12:23:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:51.140 12:23:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:51.140 { 00:27:51.140 "params": { 00:27:51.140 "name": "Nvme$subsystem", 00:27:51.140 "trtype": "$TEST_TRANSPORT", 00:27:51.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.140 "adrfam": "ipv4", 00:27:51.140 "trsvcid": "$NVMF_PORT", 00:27:51.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.140 "hdgst": ${hdgst:-false}, 00:27:51.140 "ddgst": ${ddgst:-false} 00:27:51.140 }, 00:27:51.140 "method": "bdev_nvme_attach_controller" 00:27:51.140 } 00:27:51.140 EOF 00:27:51.140 )") 00:27:51.140 12:23:51 -- target/dif.sh@72 -- # (( file++ )) 00:27:51.140 12:23:51 -- target/dif.sh@72 -- # (( file <= files )) 00:27:51.140 12:23:51 -- nvmf/common.sh@543 -- # cat 00:27:51.140 12:23:51 -- nvmf/common.sh@545 -- # jq . 
00:27:51.140 12:23:51 -- nvmf/common.sh@546 -- # IFS=, 00:27:51.140 12:23:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:51.140 "params": { 00:27:51.140 "name": "Nvme0", 00:27:51.140 "trtype": "tcp", 00:27:51.140 "traddr": "10.0.0.2", 00:27:51.140 "adrfam": "ipv4", 00:27:51.140 "trsvcid": "4420", 00:27:51.140 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:51.140 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:51.140 "hdgst": false, 00:27:51.140 "ddgst": false 00:27:51.140 }, 00:27:51.140 "method": "bdev_nvme_attach_controller" 00:27:51.140 },{ 00:27:51.140 "params": { 00:27:51.140 "name": "Nvme1", 00:27:51.140 "trtype": "tcp", 00:27:51.140 "traddr": "10.0.0.2", 00:27:51.140 "adrfam": "ipv4", 00:27:51.140 "trsvcid": "4420", 00:27:51.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:51.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:51.140 "hdgst": false, 00:27:51.140 "ddgst": false 00:27:51.140 }, 00:27:51.140 "method": "bdev_nvme_attach_controller" 00:27:51.140 }' 00:27:51.140 12:23:51 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:51.140 12:23:51 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:51.140 12:23:51 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:51.140 12:23:51 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:51.140 12:23:51 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:51.140 12:23:51 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:51.140 12:23:51 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:51.140 12:23:51 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:51.140 12:23:51 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:51.140 12:23:51 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:51.140 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:51.140 ... 00:27:51.140 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:51.140 ... 
00:27:51.140 fio-3.35 00:27:51.140 Starting 4 threads 00:27:51.140 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.427 00:27:56.427 filename0: (groupid=0, jobs=1): err= 0: pid=3592872: Fri Apr 26 12:23:57 2024 00:27:56.427 read: IOPS=2075, BW=16.2MiB/s (17.0MB/s)(81.1MiB/5004msec) 00:27:56.427 slat (nsec): min=5325, max=40386, avg=7484.60, stdev=3214.56 00:27:56.427 clat (usec): min=1431, max=6917, avg=3833.80, stdev=726.81 00:27:56.427 lat (usec): min=1455, max=6946, avg=3841.29, stdev=726.78 00:27:56.427 clat percentiles (usec): 00:27:56.427 | 1.00th=[ 2507], 5.00th=[ 2933], 10.00th=[ 3195], 20.00th=[ 3392], 00:27:56.427 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3752], 00:27:56.427 | 70.00th=[ 3851], 80.00th=[ 4146], 90.00th=[ 5276], 95.00th=[ 5342], 00:27:56.427 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6325], 99.95th=[ 6456], 00:27:56.427 | 99.99th=[ 6915] 00:27:56.427 bw ( KiB/s): min=16304, max=17376, per=25.35%, avg=16606.40, stdev=336.00, samples=10 00:27:56.427 iops : min= 2038, max= 2172, avg=2075.80, stdev=42.00, samples=10 00:27:56.427 lat (msec) : 2=0.24%, 4=74.65%, 10=25.11% 00:27:56.427 cpu : usr=97.50%, sys=2.22%, ctx=7, majf=0, minf=0 00:27:56.427 IO depths : 1=0.1%, 2=0.7%, 4=71.5%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:56.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.427 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.427 issued rwts: total=10384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:56.427 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:56.427 filename0: (groupid=0, jobs=1): err= 0: pid=3592873: Fri Apr 26 12:23:57 2024 00:27:56.427 read: IOPS=2042, BW=16.0MiB/s (16.7MB/s)(79.8MiB/5002msec) 00:27:56.427 slat (nsec): min=5328, max=40951, avg=7648.10, stdev=3248.76 00:27:56.427 clat (usec): min=1496, max=48038, avg=3895.07, stdev=1415.00 00:27:56.427 lat (usec): min=1502, max=48061, avg=3902.72, stdev=1415.11 00:27:56.427 clat percentiles (usec): 00:27:56.427 | 1.00th=[ 2671], 5.00th=[ 3097], 10.00th=[ 3261], 20.00th=[ 3392], 00:27:56.427 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3785], 00:27:56.427 | 70.00th=[ 3884], 80.00th=[ 4146], 90.00th=[ 5276], 95.00th=[ 5342], 00:27:56.427 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 6587], 99.95th=[47973], 00:27:56.427 | 99.99th=[47973] 00:27:56.427 bw ( KiB/s): min=14640, max=16800, per=24.86%, avg=16288.00, stdev=641.50, samples=9 00:27:56.427 iops : min= 1830, max= 2100, avg=2036.00, stdev=80.19, samples=9 00:27:56.427 lat (msec) : 2=0.04%, 4=75.36%, 10=24.52%, 50=0.08% 00:27:56.427 cpu : usr=96.84%, sys=2.88%, ctx=5, majf=0, minf=9 00:27:56.427 IO depths : 1=0.3%, 2=0.9%, 4=71.3%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:56.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.427 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.427 issued rwts: total=10217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:56.427 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:56.427 filename1: (groupid=0, jobs=1): err= 0: pid=3592874: Fri Apr 26 12:23:57 2024 00:27:56.427 read: IOPS=2045, BW=16.0MiB/s (16.8MB/s)(79.9MiB/5001msec) 00:27:56.427 slat (nsec): min=5322, max=49073, avg=7259.39, stdev=3135.33 00:27:56.427 clat (usec): min=2085, max=7158, avg=3891.64, stdev=716.30 00:27:56.427 lat (usec): min=2102, max=7166, avg=3898.90, stdev=716.24 00:27:56.427 clat percentiles (usec): 00:27:56.427 | 1.00th=[ 2704], 
5.00th=[ 3097], 10.00th=[ 3261], 20.00th=[ 3425], 00:27:56.427 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3785], 00:27:56.427 | 70.00th=[ 3916], 80.00th=[ 4228], 90.00th=[ 5276], 95.00th=[ 5407], 00:27:56.427 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 6718], 99.95th=[ 7046], 00:27:56.427 | 99.99th=[ 7177] 00:27:56.427 bw ( KiB/s): min=16096, max=16768, per=24.96%, avg=16355.56, stdev=217.00, samples=9 00:27:56.428 iops : min= 2012, max= 2096, avg=2044.44, stdev=27.13, samples=9 00:27:56.428 lat (msec) : 4=72.97%, 10=27.03% 00:27:56.428 cpu : usr=97.08%, sys=2.68%, ctx=5, majf=0, minf=9 00:27:56.428 IO depths : 1=0.2%, 2=0.7%, 4=71.6%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:56.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.428 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.428 issued rwts: total=10228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:56.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:56.428 filename1: (groupid=0, jobs=1): err= 0: pid=3592875: Fri Apr 26 12:23:57 2024 00:27:56.428 read: IOPS=2074, BW=16.2MiB/s (17.0MB/s)(81.7MiB/5042msec) 00:27:56.428 slat (nsec): min=5326, max=51561, avg=7727.02, stdev=3158.23 00:27:56.428 clat (usec): min=1542, max=42910, avg=3815.26, stdev=977.13 00:27:56.428 lat (usec): min=1548, max=42915, avg=3822.99, stdev=976.91 00:27:56.428 clat percentiles (usec): 00:27:56.428 | 1.00th=[ 2540], 5.00th=[ 2900], 10.00th=[ 3130], 20.00th=[ 3326], 00:27:56.428 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3720], 00:27:56.428 | 70.00th=[ 3851], 80.00th=[ 4228], 90.00th=[ 5211], 95.00th=[ 5342], 00:27:56.428 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6521], 99.95th=[ 6652], 00:27:56.428 | 99.99th=[42730] 00:27:56.428 bw ( KiB/s): min=15664, max=17344, per=25.54%, avg=16732.80, stdev=474.15, samples=10 00:27:56.428 iops : min= 1958, max= 2168, avg=2091.60, stdev=59.27, samples=10 00:27:56.428 lat (msec) : 2=0.15%, 4=74.69%, 10=25.13%, 50=0.03% 00:27:56.428 cpu : usr=96.63%, sys=3.11%, ctx=8, majf=0, minf=9 00:27:56.428 IO depths : 1=0.1%, 2=0.7%, 4=71.4%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:56.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.428 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.428 issued rwts: total=10461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:56.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:56.428 00:27:56.428 Run status group 0 (all jobs): 00:27:56.428 READ: bw=64.0MiB/s (67.1MB/s), 16.0MiB/s-16.2MiB/s (16.7MB/s-17.0MB/s), io=323MiB (338MB), run=5001-5042msec 00:27:56.428 12:23:57 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:56.428 12:23:57 -- target/dif.sh@43 -- # local sub 00:27:56.428 12:23:57 -- target/dif.sh@45 -- # for sub in "$@" 00:27:56.428 12:23:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:56.428 12:23:57 -- target/dif.sh@36 -- # local sub_id=0 00:27:56.428 12:23:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:56.428 12:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.428 12:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:56.428 12:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.428 12:23:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:56.428 12:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.428 12:23:57 -- common/autotest_common.sh@10 -- # set +x 
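The 4-job run summarized above (8 KiB random reads against the two null-bdev subsystems, numjobs=2 per file) reports 64.0 MiB/s aggregate, i.e. about 16 MiB/s per job at roughly 2040-2080 IOPS each. The arithmetic, as an annotation rather than part of the log:

    # ~2045 IOPS * 8 KiB per read is ~16 MiB/s for one job; four jobs give the 64.0 MiB/s total.
    echo 'scale=1; 2045 * 8 / 1024' | bc    # prints 15.9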
00:27:56.428 12:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.428 12:23:57 -- target/dif.sh@45 -- # for sub in "$@" 00:27:56.428 12:23:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:56.428 12:23:57 -- target/dif.sh@36 -- # local sub_id=1 00:27:56.428 12:23:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:56.428 12:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.428 12:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:56.428 12:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.428 12:23:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:56.428 12:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.428 12:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:56.428 12:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.428 00:27:56.428 real 0m24.431s 00:27:56.428 user 5m17.291s 00:27:56.428 sys 0m3.961s 00:27:56.428 12:23:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:56.428 12:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:56.428 ************************************ 00:27:56.428 END TEST fio_dif_rand_params 00:27:56.428 ************************************ 00:27:56.428 12:23:57 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:56.428 12:23:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:56.428 12:23:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:56.428 12:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:56.428 ************************************ 00:27:56.428 START TEST fio_dif_digest 00:27:56.428 ************************************ 00:27:56.428 12:23:57 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:27:56.428 12:23:57 -- target/dif.sh@123 -- # local NULL_DIF 00:27:56.428 12:23:57 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:56.428 12:23:57 -- target/dif.sh@125 -- # local hdgst ddgst 00:27:56.428 12:23:57 -- target/dif.sh@127 -- # NULL_DIF=3 00:27:56.428 12:23:57 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:56.428 12:23:57 -- target/dif.sh@127 -- # numjobs=3 00:27:56.428 12:23:57 -- target/dif.sh@127 -- # iodepth=3 00:27:56.428 12:23:57 -- target/dif.sh@127 -- # runtime=10 00:27:56.428 12:23:57 -- target/dif.sh@128 -- # hdgst=true 00:27:56.428 12:23:57 -- target/dif.sh@128 -- # ddgst=true 00:27:56.428 12:23:57 -- target/dif.sh@130 -- # create_subsystems 0 00:27:56.428 12:23:57 -- target/dif.sh@28 -- # local sub 00:27:56.428 12:23:57 -- target/dif.sh@30 -- # for sub in "$@" 00:27:56.428 12:23:57 -- target/dif.sh@31 -- # create_subsystem 0 00:27:56.428 12:23:57 -- target/dif.sh@18 -- # local sub_id=0 00:27:56.428 12:23:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:56.428 12:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.428 12:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:56.428 bdev_null0 00:27:56.428 12:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.428 12:23:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:56.428 12:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.689 12:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:56.689 12:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.689 12:23:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:27:56.689 12:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.689 12:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:56.689 12:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.689 12:23:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:56.689 12:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.689 12:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:56.689 [2024-04-26 12:23:57.676709] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.689 12:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.689 12:23:57 -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:56.689 12:23:57 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:56.689 12:23:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:56.689 12:23:57 -- nvmf/common.sh@521 -- # config=() 00:27:56.689 12:23:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:56.689 12:23:57 -- nvmf/common.sh@521 -- # local subsystem config 00:27:56.689 12:23:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:56.689 12:23:57 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:56.689 12:23:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:56.689 { 00:27:56.689 "params": { 00:27:56.689 "name": "Nvme$subsystem", 00:27:56.689 "trtype": "$TEST_TRANSPORT", 00:27:56.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.689 "adrfam": "ipv4", 00:27:56.689 "trsvcid": "$NVMF_PORT", 00:27:56.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.689 "hdgst": ${hdgst:-false}, 00:27:56.689 "ddgst": ${ddgst:-false} 00:27:56.689 }, 00:27:56.689 "method": "bdev_nvme_attach_controller" 00:27:56.689 } 00:27:56.689 EOF 00:27:56.689 )") 00:27:56.689 12:23:57 -- target/dif.sh@82 -- # gen_fio_conf 00:27:56.689 12:23:57 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:56.689 12:23:57 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:56.689 12:23:57 -- target/dif.sh@54 -- # local file 00:27:56.689 12:23:57 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:56.689 12:23:57 -- target/dif.sh@56 -- # cat 00:27:56.689 12:23:57 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:56.689 12:23:57 -- common/autotest_common.sh@1327 -- # shift 00:27:56.689 12:23:57 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:56.689 12:23:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:56.689 12:23:57 -- nvmf/common.sh@543 -- # cat 00:27:56.689 12:23:57 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:56.689 12:23:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:56.689 12:23:57 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:56.689 12:23:57 -- target/dif.sh@72 -- # (( file <= files )) 00:27:56.689 12:23:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:56.689 12:23:57 -- nvmf/common.sh@545 -- # jq . 
00:27:56.689 12:23:57 -- nvmf/common.sh@546 -- # IFS=, 00:27:56.689 12:23:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:56.689 "params": { 00:27:56.689 "name": "Nvme0", 00:27:56.689 "trtype": "tcp", 00:27:56.689 "traddr": "10.0.0.2", 00:27:56.689 "adrfam": "ipv4", 00:27:56.689 "trsvcid": "4420", 00:27:56.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:56.689 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:56.689 "hdgst": true, 00:27:56.689 "ddgst": true 00:27:56.689 }, 00:27:56.689 "method": "bdev_nvme_attach_controller" 00:27:56.689 }' 00:27:56.689 12:23:57 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:56.689 12:23:57 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:56.690 12:23:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:56.690 12:23:57 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:56.690 12:23:57 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:56.690 12:23:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:56.690 12:23:57 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:56.690 12:23:57 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:56.690 12:23:57 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:56.690 12:23:57 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:56.949 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:56.949 ... 00:27:56.949 fio-3.35 00:27:56.949 Starting 3 threads 00:27:56.949 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.239 00:28:09.239 filename0: (groupid=0, jobs=1): err= 0: pid=3594379: Fri Apr 26 12:24:08 2024 00:28:09.239 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(274MiB/10045msec) 00:28:09.239 slat (nsec): min=5579, max=34917, avg=6438.26, stdev=1047.42 00:28:09.239 clat (usec): min=8778, max=51758, avg=13726.18, stdev=1602.56 00:28:09.240 lat (usec): min=8800, max=51764, avg=13732.62, stdev=1602.44 00:28:09.240 clat percentiles (usec): 00:28:09.240 | 1.00th=[10421], 5.00th=[11731], 10.00th=[12256], 20.00th=[12780], 00:28:09.240 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:28:09.240 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15008], 95.00th=[15533], 00:28:09.240 | 99.00th=[16450], 99.50th=[16909], 99.90th=[18482], 99.95th=[49021], 00:28:09.240 | 99.99th=[51643] 00:28:09.240 bw ( KiB/s): min=27392, max=28928, per=33.59%, avg=28019.20, stdev=501.62, samples=20 00:28:09.240 iops : min= 214, max= 226, avg=218.90, stdev= 3.92, samples=20 00:28:09.240 lat (msec) : 10=0.55%, 20=99.36%, 50=0.05%, 100=0.05% 00:28:09.240 cpu : usr=94.50%, sys=5.26%, ctx=29, majf=0, minf=134 00:28:09.240 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:09.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.240 issued rwts: total=2191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:09.240 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:09.240 filename0: (groupid=0, jobs=1): err= 0: pid=3594380: Fri Apr 26 12:24:08 2024 00:28:09.240 read: IOPS=223, BW=28.0MiB/s (29.3MB/s)(281MiB/10045msec) 00:28:09.240 slat (nsec): min=5545, max=30879, avg=6405.32, stdev=822.97 00:28:09.240 clat (usec): 
min=8540, max=45740, avg=13359.81, stdev=1406.14 00:28:09.240 lat (usec): min=8547, max=45746, avg=13366.22, stdev=1406.09 00:28:09.240 clat percentiles (usec): 00:28:09.240 | 1.00th=[ 9503], 5.00th=[11338], 10.00th=[11863], 20.00th=[12387], 00:28:09.240 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:28:09.240 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14877], 95.00th=[15270], 00:28:09.240 | 99.00th=[16188], 99.50th=[16581], 99.90th=[17171], 99.95th=[17171], 00:28:09.240 | 99.99th=[45876] 00:28:09.240 bw ( KiB/s): min=27648, max=30976, per=34.48%, avg=28761.60, stdev=876.15, samples=20 00:28:09.240 iops : min= 216, max= 242, avg=224.70, stdev= 6.84, samples=20 00:28:09.240 lat (msec) : 10=1.29%, 20=98.67%, 50=0.04% 00:28:09.240 cpu : usr=94.52%, sys=5.23%, ctx=25, majf=0, minf=133 00:28:09.240 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:09.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.240 issued rwts: total=2248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:09.240 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:09.240 filename0: (groupid=0, jobs=1): err= 0: pid=3594381: Fri Apr 26 12:24:08 2024 00:28:09.240 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(264MiB/10046msec) 00:28:09.240 slat (nsec): min=5564, max=70534, avg=6639.10, stdev=1752.12 00:28:09.240 clat (usec): min=9542, max=57992, avg=14268.07, stdev=3122.82 00:28:09.240 lat (usec): min=9548, max=57998, avg=14274.71, stdev=3122.98 00:28:09.240 clat percentiles (usec): 00:28:09.240 | 1.00th=[11338], 5.00th=[12125], 10.00th=[12649], 20.00th=[13042], 00:28:09.240 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14091], 60.00th=[14353], 00:28:09.240 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15664], 95.00th=[16057], 00:28:09.240 | 99.00th=[17171], 99.50th=[47449], 99.90th=[56886], 99.95th=[57410], 00:28:09.240 | 99.99th=[57934] 00:28:09.240 bw ( KiB/s): min=23808, max=28416, per=32.31%, avg=26956.80, stdev=1086.43, samples=20 00:28:09.240 iops : min= 186, max= 222, avg=210.60, stdev= 8.49, samples=20 00:28:09.240 lat (msec) : 10=0.05%, 20=99.43%, 50=0.09%, 100=0.43% 00:28:09.240 cpu : usr=95.10%, sys=4.64%, ctx=41, majf=0, minf=167 00:28:09.240 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:09.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.240 issued rwts: total=2108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:09.240 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:09.240 00:28:09.240 Run status group 0 (all jobs): 00:28:09.240 READ: bw=81.5MiB/s (85.4MB/s), 26.2MiB/s-28.0MiB/s (27.5MB/s-29.3MB/s), io=818MiB (858MB), run=10045-10046msec 00:28:09.240 12:24:08 -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:09.240 12:24:08 -- target/dif.sh@43 -- # local sub 00:28:09.240 12:24:08 -- target/dif.sh@45 -- # for sub in "$@" 00:28:09.240 12:24:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:09.240 12:24:08 -- target/dif.sh@36 -- # local sub_id=0 00:28:09.240 12:24:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:09.240 12:24:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.240 12:24:08 -- common/autotest_common.sh@10 -- # set +x 00:28:09.240 12:24:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.240 
12:24:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:09.240 12:24:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.240 12:24:08 -- common/autotest_common.sh@10 -- # set +x 00:28:09.240 12:24:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.240 00:28:09.240 real 0m11.102s 00:28:09.240 user 0m42.398s 00:28:09.240 sys 0m1.823s 00:28:09.240 12:24:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:09.240 12:24:08 -- common/autotest_common.sh@10 -- # set +x 00:28:09.240 ************************************ 00:28:09.240 END TEST fio_dif_digest 00:28:09.240 ************************************ 00:28:09.240 12:24:08 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:09.240 12:24:08 -- target/dif.sh@147 -- # nvmftestfini 00:28:09.240 12:24:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:09.240 12:24:08 -- nvmf/common.sh@117 -- # sync 00:28:09.240 12:24:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:09.240 12:24:08 -- nvmf/common.sh@120 -- # set +e 00:28:09.240 12:24:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:09.240 12:24:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:09.240 rmmod nvme_tcp 00:28:09.240 rmmod nvme_fabrics 00:28:09.240 rmmod nvme_keyring 00:28:09.240 12:24:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:09.240 12:24:08 -- nvmf/common.sh@124 -- # set -e 00:28:09.240 12:24:08 -- nvmf/common.sh@125 -- # return 0 00:28:09.240 12:24:08 -- nvmf/common.sh@478 -- # '[' -n 3583846 ']' 00:28:09.240 12:24:08 -- nvmf/common.sh@479 -- # killprocess 3583846 00:28:09.240 12:24:08 -- common/autotest_common.sh@936 -- # '[' -z 3583846 ']' 00:28:09.240 12:24:08 -- common/autotest_common.sh@940 -- # kill -0 3583846 00:28:09.240 12:24:08 -- common/autotest_common.sh@941 -- # uname 00:28:09.240 12:24:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:09.240 12:24:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3583846 00:28:09.240 12:24:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:09.240 12:24:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:09.240 12:24:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3583846' 00:28:09.240 killing process with pid 3583846 00:28:09.240 12:24:08 -- common/autotest_common.sh@955 -- # kill 3583846 00:28:09.240 12:24:08 -- common/autotest_common.sh@960 -- # wait 3583846 00:28:09.240 12:24:09 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:09.240 12:24:09 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:11.158 Waiting for block devices as requested 00:28:11.158 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:11.158 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:11.419 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:11.419 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:11.419 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:11.680 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:11.680 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:11.680 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:11.942 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:11.942 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:12.202 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:12.202 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:12.202 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:12.202 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:12.462 0000:00:01.3 (8086 0b00): 
vfio-pci -> ioatdma 00:28:12.462 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:12.462 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:12.722 12:24:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:12.722 12:24:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:12.722 12:24:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.722 12:24:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.722 12:24:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.722 12:24:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:12.722 12:24:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.268 12:24:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:15.268 00:28:15.268 real 1m17.727s 00:28:15.268 user 8m6.918s 00:28:15.268 sys 0m20.287s 00:28:15.268 12:24:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:15.268 12:24:15 -- common/autotest_common.sh@10 -- # set +x 00:28:15.268 ************************************ 00:28:15.268 END TEST nvmf_dif 00:28:15.268 ************************************ 00:28:15.268 12:24:15 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:15.268 12:24:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:15.268 12:24:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:15.268 12:24:15 -- common/autotest_common.sh@10 -- # set +x 00:28:15.268 ************************************ 00:28:15.268 START TEST nvmf_abort_qd_sizes 00:28:15.268 ************************************ 00:28:15.268 12:24:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:15.268 * Looking for test storage... 
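For reference, the fio_dif_digest case that just finished above builds its target with a short RPC sequence before launching fio with header and data digests enabled. A minimal sketch of that sequence, written as direct scripts/rpc.py calls instead of the autotest rpc_cmd wrapper (that substitution, and the assumption that the nvmf target and TCP transport are already up from earlier in the suite, are for readability only):

    # Sketch only -- flags copied from the trace above, not the test script itself.
    rpc=./scripts/rpc.py

    # Null bdev: 64 MiB, 512-byte blocks with 16 bytes of metadata, DIF type 3
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # fio then attaches via the SPDK bdev plugin using the generated JSON shown
    # in the log, with "hdgst": true and "ddgst": true so the NVMe/TCP header
    # and data digests are exercised during the run.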
00:28:15.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:15.268 12:24:16 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.269 12:24:16 -- nvmf/common.sh@7 -- # uname -s 00:28:15.269 12:24:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.269 12:24:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.269 12:24:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.269 12:24:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.269 12:24:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.269 12:24:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.269 12:24:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.269 12:24:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.269 12:24:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.269 12:24:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.269 12:24:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:15.269 12:24:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:15.269 12:24:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.269 12:24:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.269 12:24:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.269 12:24:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.269 12:24:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.269 12:24:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.269 12:24:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.269 12:24:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.269 12:24:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.269 12:24:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.269 12:24:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.269 12:24:16 -- paths/export.sh@5 -- # export PATH 00:28:15.269 12:24:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.269 12:24:16 -- nvmf/common.sh@47 -- # : 0 00:28:15.269 12:24:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:15.269 12:24:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:15.269 12:24:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.269 12:24:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.269 12:24:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.269 12:24:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:15.269 12:24:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:15.269 12:24:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:15.269 12:24:16 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:15.269 12:24:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:15.269 12:24:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.269 12:24:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:15.269 12:24:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:15.269 12:24:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:15.269 12:24:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.269 12:24:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:15.269 12:24:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.269 12:24:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:15.269 12:24:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:15.269 12:24:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:15.269 12:24:16 -- common/autotest_common.sh@10 -- # set +x 00:28:23.413 12:24:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:23.413 12:24:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:23.413 12:24:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:23.413 12:24:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:23.413 12:24:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:23.413 12:24:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:23.413 12:24:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:23.413 12:24:23 -- nvmf/common.sh@295 -- # net_devs=() 00:28:23.413 12:24:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:23.413 12:24:23 -- nvmf/common.sh@296 -- # e810=() 00:28:23.413 12:24:23 -- nvmf/common.sh@296 -- # local -ga e810 00:28:23.413 12:24:23 -- nvmf/common.sh@297 -- # x722=() 00:28:23.413 12:24:23 -- nvmf/common.sh@297 -- # local -ga x722 00:28:23.413 12:24:23 -- nvmf/common.sh@298 -- # mlx=() 00:28:23.414 12:24:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:23.414 12:24:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:23.414 12:24:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:23.414 12:24:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:23.414 12:24:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:23.414 12:24:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:23.414 12:24:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:23.414 12:24:23 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:23.414 12:24:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:23.414 12:24:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:23.414 12:24:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:23.414 12:24:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:23.414 12:24:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:23.414 12:24:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:23.414 12:24:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:23.414 12:24:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:23.414 12:24:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:23.414 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:23.414 12:24:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:23.414 12:24:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:23.414 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:23.414 12:24:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:23.414 12:24:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:23.414 12:24:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.414 12:24:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:23.414 12:24:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.414 12:24:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:23.414 Found net devices under 0000:31:00.0: cvl_0_0 00:28:23.414 12:24:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.414 12:24:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:23.414 12:24:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.414 12:24:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:23.414 12:24:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.414 12:24:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:23.414 Found net devices under 0000:31:00.1: cvl_0_1 00:28:23.414 12:24:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.414 12:24:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:23.414 12:24:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:23.414 12:24:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:23.414 12:24:23 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:23.414 12:24:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:23.414 12:24:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:23.414 12:24:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:23.414 12:24:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:23.414 12:24:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:23.414 12:24:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:23.414 12:24:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:23.414 12:24:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:23.414 12:24:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:23.414 12:24:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:23.414 12:24:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:23.414 12:24:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:23.414 12:24:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:23.414 12:24:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:23.414 12:24:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:23.414 12:24:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:23.414 12:24:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:23.414 12:24:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.414 12:24:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:23.414 12:24:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:23.414 12:24:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:23.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:23.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:28:23.414 00:28:23.414 --- 10.0.0.2 ping statistics --- 00:28:23.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.414 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:28:23.414 12:24:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:23.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:23.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:28:23.414 00:28:23.414 --- 10.0.0.1 ping statistics --- 00:28:23.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.414 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:28:23.414 12:24:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.414 12:24:23 -- nvmf/common.sh@411 -- # return 0 00:28:23.414 12:24:23 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:28:23.414 12:24:23 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:25.961 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:25.961 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:26.222 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:26.482 12:24:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.482 12:24:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:26.482 12:24:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:26.482 12:24:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.482 12:24:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:26.482 12:24:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:26.482 12:24:27 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:26.482 12:24:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:26.482 12:24:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:26.482 12:24:27 -- common/autotest_common.sh@10 -- # set +x 00:28:26.482 12:24:27 -- nvmf/common.sh@470 -- # nvmfpid=3604495 00:28:26.482 12:24:27 -- nvmf/common.sh@471 -- # waitforlisten 3604495 00:28:26.482 12:24:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:26.482 12:24:27 -- common/autotest_common.sh@817 -- # '[' -z 3604495 ']' 00:28:26.482 12:24:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.483 12:24:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:26.483 12:24:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.483 12:24:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:26.483 12:24:27 -- common/autotest_common.sh@10 -- # set +x 00:28:26.483 [2024-04-26 12:24:27.654386] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:28:26.483 [2024-04-26 12:24:27.654441] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.483 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.742 [2024-04-26 12:24:27.723520] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:26.742 [2024-04-26 12:24:27.794450] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.742 [2024-04-26 12:24:27.794490] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.742 [2024-04-26 12:24:27.794499] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.742 [2024-04-26 12:24:27.794507] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.742 [2024-04-26 12:24:27.794515] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.742 [2024-04-26 12:24:27.795128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.742 [2024-04-26 12:24:27.795312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.742 [2024-04-26 12:24:27.795488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.742 [2024-04-26 12:24:27.795489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.312 12:24:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:27.312 12:24:28 -- common/autotest_common.sh@850 -- # return 0 00:28:27.312 12:24:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:27.312 12:24:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:27.312 12:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:27.312 12:24:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.312 12:24:28 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:27.312 12:24:28 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:27.312 12:24:28 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:27.312 12:24:28 -- scripts/common.sh@309 -- # local bdf bdfs 00:28:27.312 12:24:28 -- scripts/common.sh@310 -- # local nvmes 00:28:27.312 12:24:28 -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:28:27.312 12:24:28 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:27.312 12:24:28 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:27.312 12:24:28 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:28:27.312 12:24:28 -- scripts/common.sh@320 -- # uname -s 00:28:27.312 12:24:28 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:27.312 12:24:28 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:27.312 12:24:28 -- scripts/common.sh@325 -- # (( 1 )) 00:28:27.312 12:24:28 -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:28:27.312 12:24:28 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:27.312 12:24:28 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:28:27.312 12:24:28 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:27.312 12:24:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:27.312 12:24:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:27.312 12:24:28 -- 
common/autotest_common.sh@10 -- # set +x 00:28:27.573 ************************************ 00:28:27.573 START TEST spdk_target_abort 00:28:27.573 ************************************ 00:28:27.573 12:24:28 -- common/autotest_common.sh@1111 -- # spdk_target 00:28:27.573 12:24:28 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:27.573 12:24:28 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:28:27.573 12:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.573 12:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:27.834 spdk_targetn1 00:28:27.834 12:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:27.834 12:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.834 12:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:27.834 [2024-04-26 12:24:28.939999] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.834 12:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:27.834 12:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.834 12:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:27.834 12:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:27.834 12:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.834 12:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:27.834 12:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:27.834 12:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.834 12:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:27.834 [2024-04-26 12:24:28.980269] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.834 12:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:27.834 12:24:28 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:27.834 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.094 [2024-04-26 12:24:29.151323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:656 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:28:28.094 [2024-04-26 12:24:29.151346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0054 p:1 m:0 dnr:0 00:28:28.094 [2024-04-26 12:24:29.198272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2824 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:28:28.094 [2024-04-26 12:24:29.198289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:31.390 Initializing NVMe Controllers 00:28:31.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:31.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:31.390 Initialization complete. Launching workers. 
00:28:31.390 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9710, failed: 2 00:28:31.390 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2241, failed to submit 7471 00:28:31.390 success 554, unsuccess 1687, failed 0 00:28:31.390 12:24:32 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:31.390 12:24:32 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:31.390 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.390 [2024-04-26 12:24:32.335035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:480 len:8 PRP1 0x200007c3c000 PRP2 0x0 00:28:31.390 [2024-04-26 12:24:32.335076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:28:31.390 [2024-04-26 12:24:32.342673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:688 len:8 PRP1 0x200007c54000 PRP2 0x0 00:28:31.390 [2024-04-26 12:24:32.342696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:28:31.390 [2024-04-26 12:24:32.350022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:816 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:28:31.390 [2024-04-26 12:24:32.350043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:28:31.390 [2024-04-26 12:24:32.441027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:3056 len:8 PRP1 0x200007c52000 PRP2 0x0 00:28:31.390 [2024-04-26 12:24:32.441054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:0086 p:0 m:0 dnr:0 00:28:31.650 [2024-04-26 12:24:32.744108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:10144 len:8 PRP1 0x200007c56000 PRP2 0x0 00:28:31.650 [2024-04-26 12:24:32.744135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:00fc p:1 m:0 dnr:0 00:28:31.650 [2024-04-26 12:24:32.837915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:12280 len:8 PRP1 0x200007c3a000 PRP2 0x0 00:28:31.650 [2024-04-26 12:24:32.837939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:34.195 [2024-04-26 12:24:34.798975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:56352 len:8 PRP1 0x200007c48000 PRP2 0x0 00:28:34.195 [2024-04-26 12:24:34.799015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0088 p:0 m:0 dnr:0 00:28:34.457 Initializing NVMe Controllers 00:28:34.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:34.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:34.457 Initialization complete. Launching workers. 
00:28:34.457 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8515, failed: 7 00:28:34.457 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1215, failed to submit 7307 00:28:34.457 success 388, unsuccess 827, failed 0 00:28:34.458 12:24:35 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:34.458 12:24:35 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:34.458 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.002 [2024-04-26 12:24:37.600135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:157 nsid:1 lba:228808 len:8 PRP1 0x200007914000 PRP2 0x0 00:28:37.002 [2024-04-26 12:24:37.600168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:157 cdw0:0 sqhd:00af p:0 m:0 dnr:0 00:28:37.575 Initializing NVMe Controllers 00:28:37.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:37.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:37.575 Initialization complete. Launching workers. 00:28:37.575 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42223, failed: 1 00:28:37.575 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2556, failed to submit 39668 00:28:37.575 success 583, unsuccess 1973, failed 0 00:28:37.575 12:24:38 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:37.575 12:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:37.575 12:24:38 -- common/autotest_common.sh@10 -- # set +x 00:28:37.575 12:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:37.575 12:24:38 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:37.575 12:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:37.575 12:24:38 -- common/autotest_common.sh@10 -- # set +x 00:28:39.488 12:24:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.488 12:24:40 -- target/abort_qd_sizes.sh@61 -- # killprocess 3604495 00:28:39.488 12:24:40 -- common/autotest_common.sh@936 -- # '[' -z 3604495 ']' 00:28:39.488 12:24:40 -- common/autotest_common.sh@940 -- # kill -0 3604495 00:28:39.488 12:24:40 -- common/autotest_common.sh@941 -- # uname 00:28:39.488 12:24:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:39.488 12:24:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3604495 00:28:39.488 12:24:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:39.488 12:24:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:39.488 12:24:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3604495' 00:28:39.488 killing process with pid 3604495 00:28:39.488 12:24:40 -- common/autotest_common.sh@955 -- # kill 3604495 00:28:39.488 12:24:40 -- common/autotest_common.sh@960 -- # wait 3604495 00:28:39.488 00:28:39.488 real 0m12.011s 00:28:39.488 user 0m49.524s 00:28:39.488 sys 0m1.619s 00:28:39.488 12:24:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:39.488 12:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:39.488 ************************************ 00:28:39.488 END TEST spdk_target_abort 00:28:39.488 
************************************ 00:28:39.488 12:24:40 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:39.488 12:24:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:39.488 12:24:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:39.488 12:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:39.749 ************************************ 00:28:39.749 START TEST kernel_target_abort 00:28:39.749 ************************************ 00:28:39.749 12:24:40 -- common/autotest_common.sh@1111 -- # kernel_target 00:28:39.749 12:24:40 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:39.749 12:24:40 -- nvmf/common.sh@717 -- # local ip 00:28:39.749 12:24:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:39.749 12:24:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:39.749 12:24:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.749 12:24:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.749 12:24:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:39.749 12:24:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.749 12:24:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:39.749 12:24:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:39.749 12:24:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:39.749 12:24:40 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:39.749 12:24:40 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:39.749 12:24:40 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:28:39.749 12:24:40 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:39.749 12:24:40 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:39.749 12:24:40 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:39.750 12:24:40 -- nvmf/common.sh@628 -- # local block nvme 00:28:39.750 12:24:40 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:39.750 12:24:40 -- nvmf/common.sh@631 -- # modprobe nvmet 00:28:39.750 12:24:40 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:39.750 12:24:40 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:43.067 Waiting for block devices as requested 00:28:43.067 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:43.067 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:43.067 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:43.067 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:43.067 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:43.328 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:43.328 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:43.328 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:43.589 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:43.589 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:43.589 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:43.850 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:43.850 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:43.850 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:43.850 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:44.111 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:44.111 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:44.371 12:24:45 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:28:44.371 12:24:45 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:44.371 12:24:45 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:28:44.371 12:24:45 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:44.371 12:24:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:44.371 12:24:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:44.371 12:24:45 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:28:44.371 12:24:45 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:44.371 12:24:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:44.371 No valid GPT data, bailing 00:28:44.371 12:24:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:44.371 12:24:45 -- scripts/common.sh@391 -- # pt= 00:28:44.371 12:24:45 -- scripts/common.sh@392 -- # return 1 00:28:44.371 12:24:45 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:28:44.371 12:24:45 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:28:44.371 12:24:45 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:44.371 12:24:45 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:44.371 12:24:45 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:44.371 12:24:45 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:44.371 12:24:45 -- nvmf/common.sh@656 -- # echo 1 00:28:44.371 12:24:45 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:28:44.371 12:24:45 -- nvmf/common.sh@658 -- # echo 1 00:28:44.371 12:24:45 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:28:44.371 12:24:45 -- nvmf/common.sh@661 -- # echo tcp 00:28:44.371 12:24:45 -- nvmf/common.sh@662 -- # echo 4420 00:28:44.371 12:24:45 -- nvmf/common.sh@663 -- # echo ipv4 00:28:44.371 12:24:45 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:44.371 12:24:45 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:28:44.632 00:28:44.632 Discovery Log Number of Records 2, Generation counter 2 00:28:44.632 =====Discovery Log Entry 0====== 00:28:44.632 trtype: tcp 00:28:44.632 adrfam: ipv4 00:28:44.632 subtype: current discovery subsystem 00:28:44.632 treq: not specified, sq flow control disable supported 00:28:44.632 portid: 1 00:28:44.632 trsvcid: 4420 00:28:44.632 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:44.632 traddr: 10.0.0.1 00:28:44.632 eflags: none 00:28:44.632 sectype: none 00:28:44.632 =====Discovery Log Entry 1====== 00:28:44.632 trtype: tcp 00:28:44.632 adrfam: ipv4 00:28:44.632 subtype: nvme subsystem 00:28:44.632 treq: not specified, sq flow control disable supported 00:28:44.632 portid: 1 00:28:44.632 trsvcid: 4420 00:28:44.632 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:44.632 traddr: 10.0.0.1 00:28:44.632 eflags: none 00:28:44.632 sectype: none 00:28:44.632 12:24:45 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:44.632 12:24:45 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:44.632 12:24:45 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:44.632 12:24:45 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:44.632 12:24:45 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:44.632 12:24:45 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:44.632 12:24:45 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:44.632 12:24:45 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:44.632 12:24:45 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:44.632 12:24:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:44.632 12:24:45 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:44.632 12:24:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:44.632 12:24:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:44.633 12:24:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:44.633 12:24:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:44.633 12:24:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:44.633 12:24:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:44.633 12:24:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:44.633 12:24:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:44.633 12:24:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:44.633 12:24:45 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:44.633 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.103 Initializing NVMe Controllers 00:28:48.103 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:48.103 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:48.103 Initialization complete. Launching workers. 
00:28:48.103 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66591, failed: 0 00:28:48.103 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66591, failed to submit 0 00:28:48.103 success 0, unsuccess 66591, failed 0 00:28:48.103 12:24:48 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:48.103 12:24:48 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:48.103 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.650 Initializing NVMe Controllers 00:28:50.650 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:50.650 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:50.650 Initialization complete. Launching workers. 00:28:50.650 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 108440, failed: 0 00:28:50.650 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27298, failed to submit 81142 00:28:50.650 success 0, unsuccess 27298, failed 0 00:28:50.650 12:24:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:50.650 12:24:51 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:50.650 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.951 Initializing NVMe Controllers 00:28:53.951 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:53.951 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:53.951 Initialization complete. Launching workers. 
00:28:53.951 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103840, failed: 0 00:28:53.952 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25974, failed to submit 77866 00:28:53.952 success 0, unsuccess 25974, failed 0 00:28:53.952 12:24:54 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:53.952 12:24:54 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:53.952 12:24:54 -- nvmf/common.sh@675 -- # echo 0 00:28:53.952 12:24:54 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:53.952 12:24:54 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:53.952 12:24:54 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:53.952 12:24:54 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:53.952 12:24:54 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:28:53.952 12:24:54 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:28:53.952 12:24:54 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:57.263 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:57.263 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:57.263 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:57.263 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:57.263 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:57.263 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:57.524 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:57.524 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:57.524 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:57.524 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:57.524 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:57.524 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:57.524 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:57.524 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:57.524 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:57.524 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:59.435 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:59.697 00:28:59.697 real 0m19.853s 00:28:59.697 user 0m9.502s 00:28:59.697 sys 0m5.955s 00:28:59.697 12:25:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:59.697 12:25:00 -- common/autotest_common.sh@10 -- # set +x 00:28:59.697 ************************************ 00:28:59.697 END TEST kernel_target_abort 00:28:59.697 ************************************ 00:28:59.697 12:25:00 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:59.697 12:25:00 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:59.697 12:25:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:59.697 12:25:00 -- nvmf/common.sh@117 -- # sync 00:28:59.697 12:25:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:59.697 12:25:00 -- nvmf/common.sh@120 -- # set +e 00:28:59.697 12:25:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:59.697 12:25:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:59.697 rmmod nvme_tcp 00:28:59.697 rmmod nvme_fabrics 00:28:59.697 rmmod nvme_keyring 00:28:59.697 12:25:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:59.697 12:25:00 -- nvmf/common.sh@124 -- # set -e 00:28:59.697 12:25:00 -- nvmf/common.sh@125 -- # return 0 00:28:59.697 12:25:00 -- nvmf/common.sh@478 -- # '[' -n 3604495 ']' 
00:28:59.697 12:25:00 -- nvmf/common.sh@479 -- # killprocess 3604495 00:28:59.697 12:25:00 -- common/autotest_common.sh@936 -- # '[' -z 3604495 ']' 00:28:59.697 12:25:00 -- common/autotest_common.sh@940 -- # kill -0 3604495 00:28:59.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3604495) - No such process 00:28:59.697 12:25:00 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3604495 is not found' 00:28:59.697 Process with pid 3604495 is not found 00:28:59.697 12:25:00 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:59.697 12:25:00 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:03.015 Waiting for block devices as requested 00:29:03.015 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:03.275 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:03.275 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:03.275 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:03.275 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:03.535 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:03.535 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:03.535 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:03.796 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:03.796 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:04.057 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:04.057 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:04.057 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:04.057 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:04.318 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:04.318 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:04.318 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:04.579 12:25:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:04.579 12:25:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:04.579 12:25:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:04.579 12:25:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:04.579 12:25:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.579 12:25:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:04.579 12:25:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.129 12:25:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:07.129 00:29:07.129 real 0m51.692s 00:29:07.129 user 1m4.492s 00:29:07.129 sys 0m18.500s 00:29:07.129 12:25:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:07.129 12:25:07 -- common/autotest_common.sh@10 -- # set +x 00:29:07.129 ************************************ 00:29:07.129 END TEST nvmf_abort_qd_sizes 00:29:07.129 ************************************ 00:29:07.129 12:25:07 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:07.129 12:25:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:07.129 12:25:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:07.129 12:25:07 -- common/autotest_common.sh@10 -- # set +x 00:29:07.129 ************************************ 00:29:07.129 START TEST keyring_file 00:29:07.129 ************************************ 00:29:07.129 12:25:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:07.129 * Looking for test storage... 
00:29:07.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:07.129 12:25:08 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:07.129 12:25:08 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.129 12:25:08 -- nvmf/common.sh@7 -- # uname -s 00:29:07.129 12:25:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.129 12:25:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.129 12:25:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.129 12:25:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.129 12:25:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.129 12:25:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.129 12:25:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.129 12:25:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.129 12:25:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.129 12:25:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.129 12:25:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:07.129 12:25:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:07.129 12:25:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.129 12:25:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.129 12:25:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.129 12:25:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.129 12:25:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.129 12:25:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.129 12:25:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.129 12:25:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.129 12:25:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.129 12:25:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.129 12:25:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.129 12:25:08 -- paths/export.sh@5 -- # export PATH 00:29:07.129 12:25:08 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.129 12:25:08 -- nvmf/common.sh@47 -- # : 0 00:29:07.129 12:25:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:07.129 12:25:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:07.129 12:25:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.129 12:25:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.129 12:25:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.129 12:25:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:07.129 12:25:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:07.129 12:25:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:07.129 12:25:08 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:07.129 12:25:08 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:07.129 12:25:08 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:07.129 12:25:08 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:07.129 12:25:08 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:07.129 12:25:08 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:07.129 12:25:08 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:07.129 12:25:08 -- keyring/common.sh@15 -- # local name key digest path 00:29:07.129 12:25:08 -- keyring/common.sh@17 -- # name=key0 00:29:07.129 12:25:08 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:07.129 12:25:08 -- keyring/common.sh@17 -- # digest=0 00:29:07.129 12:25:08 -- keyring/common.sh@18 -- # mktemp 00:29:07.129 12:25:08 -- keyring/common.sh@18 -- # path=/tmp/tmp.T0m2ZqF0BR 00:29:07.129 12:25:08 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:07.129 12:25:08 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:07.129 12:25:08 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:07.129 12:25:08 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:07.129 12:25:08 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:07.129 12:25:08 -- nvmf/common.sh@693 -- # digest=0 00:29:07.129 12:25:08 -- nvmf/common.sh@694 -- # python - 00:29:07.129 12:25:08 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.T0m2ZqF0BR 00:29:07.129 12:25:08 -- keyring/common.sh@23 -- # echo /tmp/tmp.T0m2ZqF0BR 00:29:07.129 12:25:08 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.T0m2ZqF0BR 00:29:07.129 12:25:08 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:07.129 12:25:08 -- keyring/common.sh@15 -- # local name key digest path 00:29:07.129 12:25:08 -- keyring/common.sh@17 -- # name=key1 00:29:07.129 12:25:08 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:07.129 12:25:08 -- keyring/common.sh@17 -- # digest=0 00:29:07.129 12:25:08 -- keyring/common.sh@18 -- # mktemp 00:29:07.129 12:25:08 -- keyring/common.sh@18 -- # path=/tmp/tmp.RPLjgnkYDu 00:29:07.129 12:25:08 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:07.129 12:25:08 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:29:07.129 12:25:08 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:07.129 12:25:08 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:07.129 12:25:08 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:29:07.129 12:25:08 -- nvmf/common.sh@693 -- # digest=0 00:29:07.129 12:25:08 -- nvmf/common.sh@694 -- # python - 00:29:07.129 12:25:08 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.RPLjgnkYDu 00:29:07.129 12:25:08 -- keyring/common.sh@23 -- # echo /tmp/tmp.RPLjgnkYDu 00:29:07.129 12:25:08 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.RPLjgnkYDu 00:29:07.129 12:25:08 -- keyring/file.sh@30 -- # tgtpid=3615078 00:29:07.129 12:25:08 -- keyring/file.sh@32 -- # waitforlisten 3615078 00:29:07.129 12:25:08 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:07.129 12:25:08 -- common/autotest_common.sh@817 -- # '[' -z 3615078 ']' 00:29:07.129 12:25:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.129 12:25:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:07.129 12:25:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.129 12:25:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:07.129 12:25:08 -- common/autotest_common.sh@10 -- # set +x 00:29:07.391 [2024-04-26 12:25:08.355261] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:29:07.391 [2024-04-26 12:25:08.355331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3615078 ] 00:29:07.391 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.391 [2024-04-26 12:25:08.417335] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.391 [2024-04-26 12:25:08.480020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.963 12:25:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:07.963 12:25:09 -- common/autotest_common.sh@850 -- # return 0 00:29:07.963 12:25:09 -- keyring/file.sh@33 -- # rpc_cmd 00:29:07.963 12:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.963 12:25:09 -- common/autotest_common.sh@10 -- # set +x 00:29:07.963 [2024-04-26 12:25:09.135982] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.963 null0 00:29:07.963 [2024-04-26 12:25:09.168038] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:07.963 [2024-04-26 12:25:09.168373] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:07.963 [2024-04-26 12:25:09.176047] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:07.963 12:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.963 12:25:09 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:07.963 12:25:09 -- common/autotest_common.sh@638 -- # local es=0 00:29:07.963 12:25:09 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:07.963 12:25:09 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:08.224 12:25:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:08.224 12:25:09 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:08.224 12:25:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:08.224 12:25:09 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:08.224 12:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.224 12:25:09 -- common/autotest_common.sh@10 -- # set +x 00:29:08.224 [2024-04-26 12:25:09.192090] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:29:08.224 { 00:29:08.224 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.224 "secure_channel": false, 00:29:08.224 "listen_address": { 00:29:08.224 "trtype": "tcp", 00:29:08.224 "traddr": "127.0.0.1", 00:29:08.224 "trsvcid": "4420" 00:29:08.224 }, 00:29:08.224 "method": "nvmf_subsystem_add_listener", 00:29:08.224 "req_id": 1 00:29:08.224 } 00:29:08.224 Got JSON-RPC error response 00:29:08.224 response: 00:29:08.224 { 00:29:08.224 "code": -32602, 00:29:08.224 "message": "Invalid parameters" 00:29:08.224 } 00:29:08.224 12:25:09 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:08.224 12:25:09 -- common/autotest_common.sh@641 -- # es=1 00:29:08.224 12:25:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:08.224 12:25:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:08.224 12:25:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:08.224 12:25:09 -- keyring/file.sh@46 -- # bperfpid=3615121 00:29:08.224 12:25:09 -- keyring/file.sh@48 -- # waitforlisten 3615121 /var/tmp/bperf.sock 00:29:08.224 12:25:09 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:08.224 12:25:09 -- common/autotest_common.sh@817 -- # '[' -z 3615121 ']' 00:29:08.224 12:25:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:08.224 12:25:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:08.224 12:25:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:08.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:08.224 12:25:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:08.224 12:25:09 -- common/autotest_common.sh@10 -- # set +x 00:29:08.224 [2024-04-26 12:25:09.245722] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:29:08.224 [2024-04-26 12:25:09.245768] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3615121 ] 00:29:08.224 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.224 [2024-04-26 12:25:09.320606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.224 [2024-04-26 12:25:09.383065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.796 12:25:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:08.796 12:25:10 -- common/autotest_common.sh@850 -- # return 0 00:29:08.796 12:25:10 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.T0m2ZqF0BR 00:29:08.796 12:25:10 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.T0m2ZqF0BR 00:29:09.057 12:25:10 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RPLjgnkYDu 00:29:09.057 12:25:10 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RPLjgnkYDu 00:29:09.318 12:25:10 -- keyring/file.sh@51 -- # get_key key0 00:29:09.318 12:25:10 -- keyring/file.sh@51 -- # jq -r .path 00:29:09.318 12:25:10 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.318 12:25:10 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.318 12:25:10 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:09.318 12:25:10 -- keyring/file.sh@51 -- # [[ /tmp/tmp.T0m2ZqF0BR == \/\t\m\p\/\t\m\p\.\T\0\m\2\Z\q\F\0\B\R ]] 00:29:09.318 12:25:10 -- keyring/file.sh@52 -- # get_key key1 00:29:09.318 12:25:10 -- keyring/file.sh@52 -- # jq -r .path 00:29:09.318 12:25:10 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.318 12:25:10 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.318 12:25:10 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:09.579 12:25:10 -- keyring/file.sh@52 -- # [[ /tmp/tmp.RPLjgnkYDu == \/\t\m\p\/\t\m\p\.\R\P\L\j\g\n\k\Y\D\u ]] 00:29:09.579 12:25:10 -- keyring/file.sh@53 -- # get_refcnt key0 00:29:09.579 12:25:10 -- keyring/common.sh@12 -- # get_key key0 00:29:09.579 12:25:10 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.579 12:25:10 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.579 12:25:10 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.579 12:25:10 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:09.579 12:25:10 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:09.579 12:25:10 -- keyring/file.sh@54 -- # get_refcnt key1 00:29:09.579 12:25:10 -- keyring/common.sh@12 -- # get_key key1 00:29:09.579 12:25:10 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.579 12:25:10 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.579 12:25:10 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.579 12:25:10 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:09.840 12:25:10 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:09.840 
12:25:10 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:09.840 12:25:10 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:10.100 [2024-04-26 12:25:11.075998] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:10.100 nvme0n1 00:29:10.100 12:25:11 -- keyring/file.sh@59 -- # get_refcnt key0 00:29:10.100 12:25:11 -- keyring/common.sh@12 -- # get_key key0 00:29:10.100 12:25:11 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.100 12:25:11 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.100 12:25:11 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:10.100 12:25:11 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.361 12:25:11 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:10.361 12:25:11 -- keyring/file.sh@60 -- # get_refcnt key1 00:29:10.361 12:25:11 -- keyring/common.sh@12 -- # get_key key1 00:29:10.361 12:25:11 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.361 12:25:11 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.361 12:25:11 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:10.361 12:25:11 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.361 12:25:11 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:10.361 12:25:11 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:10.361 Running I/O for 1 seconds... 
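Stripped of the xtrace wrappers, the setup performed so far against the bperf RPC socket boils down to registering the two PSK files and attaching a controller that references key0. A condensed sketch using the same rpc.py calls that appear in the trace (paths and NQNs as logged):

rpc="./spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Register both interchange-format PSK files under the names key0 and key1.
$rpc keyring_file_add_key key0 /tmp/tmp.T0m2ZqF0BR
$rpc keyring_file_add_key key1 /tmp/tmp.RPLjgnkYDu

# Attach an NVMe/TCP controller that uses key0 as its TLS pre-shared key.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# keyring_get_keys reports each key's path and refcnt; the attach bumps key0's refcount,
# which is what the (( 2 == 2 )) / (( 1 == 1 )) checks above assert for key0 and key1.
$rpc keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'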
00:29:11.748 00:29:11.748 Latency(us) 00:29:11.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.748 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:11.748 nvme0n1 : 1.01 14232.32 55.60 0.00 0.00 8951.71 6471.68 17367.04 00:29:11.748 =================================================================================================================== 00:29:11.748 Total : 14232.32 55.60 0.00 0.00 8951.71 6471.68 17367.04 00:29:11.748 0 00:29:11.748 12:25:12 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:11.748 12:25:12 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:11.748 12:25:12 -- keyring/file.sh@65 -- # get_refcnt key0 00:29:11.748 12:25:12 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:11.748 12:25:12 -- keyring/common.sh@12 -- # get_key key0 00:29:11.748 12:25:12 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:11.748 12:25:12 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:11.748 12:25:12 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:11.748 12:25:12 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:11.748 12:25:12 -- keyring/file.sh@66 -- # get_refcnt key1 00:29:11.748 12:25:12 -- keyring/common.sh@12 -- # get_key key1 00:29:11.748 12:25:12 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:11.748 12:25:12 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:11.748 12:25:12 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:11.748 12:25:12 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:12.009 12:25:13 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:12.009 12:25:13 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:12.009 12:25:13 -- common/autotest_common.sh@638 -- # local es=0 00:29:12.009 12:25:13 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:12.009 12:25:13 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:12.009 12:25:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:12.009 12:25:13 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:12.009 12:25:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:12.009 12:25:13 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:12.009 12:25:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:12.270 [2024-04-26 12:25:13.241291] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:12.270 [2024-04-26 12:25:13.241528] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x692110 (107): Transport endpoint is not connected 00:29:12.270 [2024-04-26 12:25:13.242523] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x692110 (9): Bad file descriptor 00:29:12.270 [2024-04-26 12:25:13.243525] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:12.270 [2024-04-26 12:25:13.243533] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:12.270 [2024-04-26 12:25:13.243538] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:12.270 request: 00:29:12.270 { 00:29:12.270 "name": "nvme0", 00:29:12.270 "trtype": "tcp", 00:29:12.270 "traddr": "127.0.0.1", 00:29:12.270 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:12.270 "adrfam": "ipv4", 00:29:12.270 "trsvcid": "4420", 00:29:12.270 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:12.270 "psk": "key1", 00:29:12.271 "method": "bdev_nvme_attach_controller", 00:29:12.271 "req_id": 1 00:29:12.271 } 00:29:12.271 Got JSON-RPC error response 00:29:12.271 response: 00:29:12.271 { 00:29:12.271 "code": -32602, 00:29:12.271 "message": "Invalid parameters" 00:29:12.271 } 00:29:12.271 12:25:13 -- common/autotest_common.sh@641 -- # es=1 00:29:12.271 12:25:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:12.271 12:25:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:12.271 12:25:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:12.271 12:25:13 -- keyring/file.sh@71 -- # get_refcnt key0 00:29:12.271 12:25:13 -- keyring/common.sh@12 -- # get_key key0 00:29:12.271 12:25:13 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.271 12:25:13 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.271 12:25:13 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:12.271 12:25:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.271 12:25:13 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:12.271 12:25:13 -- keyring/file.sh@72 -- # get_refcnt key1 00:29:12.271 12:25:13 -- keyring/common.sh@12 -- # get_key key1 00:29:12.271 12:25:13 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.271 12:25:13 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.271 12:25:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.271 12:25:13 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:12.532 12:25:13 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:12.532 12:25:13 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:12.532 12:25:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:12.532 12:25:13 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:12.532 12:25:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:12.793 12:25:13 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:12.793 12:25:13 -- keyring/file.sh@77 -- # jq length 00:29:12.793 12:25:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:13.055 12:25:14 -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:13.055 12:25:14 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.T0m2ZqF0BR 00:29:13.055 12:25:14 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.T0m2ZqF0BR 00:29:13.055 12:25:14 -- common/autotest_common.sh@638 -- # local es=0 00:29:13.055 12:25:14 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.T0m2ZqF0BR 00:29:13.055 12:25:14 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:13.055 12:25:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:13.055 12:25:14 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:13.055 12:25:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:13.055 12:25:14 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.T0m2ZqF0BR 00:29:13.055 12:25:14 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.T0m2ZqF0BR 00:29:13.055 [2024-04-26 12:25:14.193361] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.T0m2ZqF0BR': 0100660 00:29:13.055 [2024-04-26 12:25:14.193382] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:13.055 request: 00:29:13.055 { 00:29:13.055 "name": "key0", 00:29:13.055 "path": "/tmp/tmp.T0m2ZqF0BR", 00:29:13.055 "method": "keyring_file_add_key", 00:29:13.055 "req_id": 1 00:29:13.055 } 00:29:13.055 Got JSON-RPC error response 00:29:13.055 response: 00:29:13.055 { 00:29:13.055 "code": -1, 00:29:13.055 "message": "Operation not permitted" 00:29:13.055 } 00:29:13.055 12:25:14 -- common/autotest_common.sh@641 -- # es=1 00:29:13.055 12:25:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:13.055 12:25:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:13.055 12:25:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:13.055 12:25:14 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.T0m2ZqF0BR 00:29:13.055 12:25:14 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.T0m2ZqF0BR 00:29:13.055 12:25:14 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.T0m2ZqF0BR 00:29:13.316 12:25:14 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.T0m2ZqF0BR 00:29:13.317 12:25:14 -- keyring/file.sh@88 -- # get_refcnt key0 00:29:13.317 12:25:14 -- keyring/common.sh@12 -- # get_key key0 00:29:13.317 12:25:14 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:13.317 12:25:14 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:13.317 12:25:14 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:13.317 12:25:14 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:13.317 12:25:14 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:13.317 12:25:14 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:13.317 12:25:14 -- common/autotest_common.sh@638 -- # local es=0 00:29:13.317 12:25:14 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:13.317 12:25:14 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:13.317 12:25:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:13.317 12:25:14 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:13.317 12:25:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:13.317 12:25:14 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:13.317 12:25:14 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:13.578 [2024-04-26 12:25:14.666550] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.T0m2ZqF0BR': No such file or directory 00:29:13.578 [2024-04-26 12:25:14.666567] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:13.578 [2024-04-26 12:25:14.666583] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:13.578 [2024-04-26 12:25:14.666588] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:13.578 [2024-04-26 12:25:14.666593] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:13.578 request: 00:29:13.578 { 00:29:13.578 "name": "nvme0", 00:29:13.578 "trtype": "tcp", 00:29:13.578 "traddr": "127.0.0.1", 00:29:13.578 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:13.578 "adrfam": "ipv4", 00:29:13.578 "trsvcid": "4420", 00:29:13.578 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:13.578 "psk": "key0", 00:29:13.578 "method": "bdev_nvme_attach_controller", 00:29:13.578 "req_id": 1 00:29:13.578 } 00:29:13.578 Got JSON-RPC error response 00:29:13.578 response: 00:29:13.578 { 00:29:13.578 "code": -19, 00:29:13.578 "message": "No such device" 00:29:13.578 } 00:29:13.578 12:25:14 -- common/autotest_common.sh@641 -- # es=1 00:29:13.578 12:25:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:13.578 12:25:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:13.578 12:25:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:13.578 12:25:14 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:13.578 12:25:14 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:13.839 12:25:14 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:13.839 12:25:14 -- keyring/common.sh@15 -- # local name key digest path 00:29:13.839 12:25:14 -- keyring/common.sh@17 -- # name=key0 00:29:13.839 12:25:14 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:13.839 12:25:14 -- keyring/common.sh@17 -- # digest=0 00:29:13.839 12:25:14 -- keyring/common.sh@18 -- # mktemp 00:29:13.839 12:25:14 -- keyring/common.sh@18 -- # path=/tmp/tmp.ujjQgCLO14 00:29:13.839 12:25:14 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:13.839 12:25:14 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:13.839 12:25:14 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:13.839 12:25:14 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:13.839 12:25:14 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:13.839 12:25:14 -- nvmf/common.sh@693 -- # digest=0 00:29:13.839 12:25:14 -- nvmf/common.sh@694 -- # python - 00:29:13.839 12:25:14 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ujjQgCLO14 00:29:13.839 12:25:14 -- keyring/common.sh@23 -- # echo /tmp/tmp.ujjQgCLO14 00:29:13.839 12:25:14 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.ujjQgCLO14 00:29:13.839 12:25:14 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ujjQgCLO14 00:29:13.839 12:25:14 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ujjQgCLO14 00:29:14.099 12:25:15 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:14.099 12:25:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:14.099 nvme0n1 00:29:14.099 12:25:15 -- keyring/file.sh@99 -- # get_refcnt key0 00:29:14.099 12:25:15 -- keyring/common.sh@12 -- # get_key key0 00:29:14.099 12:25:15 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:14.099 12:25:15 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.099 12:25:15 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:14.099 12:25:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.359 12:25:15 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:14.360 12:25:15 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:14.360 12:25:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:14.620 12:25:15 -- keyring/file.sh@101 -- # get_key key0 00:29:14.620 12:25:15 -- keyring/file.sh@101 -- # jq -r .removed 00:29:14.620 12:25:15 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.620 12:25:15 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:14.620 12:25:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.620 12:25:15 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:14.620 12:25:15 -- keyring/file.sh@102 -- # get_refcnt key0 00:29:14.620 12:25:15 -- keyring/common.sh@12 -- # get_key key0 00:29:14.620 12:25:15 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:14.620 12:25:15 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.620 12:25:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.620 12:25:15 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:14.881 12:25:15 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:14.881 12:25:15 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:14.881 12:25:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:15.143 12:25:16 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:15.143 12:25:16 -- keyring/file.sh@104 -- # jq length 00:29:15.143 
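One behaviour worth noting in the sequence just traced: keyring_file_remove_key does not destroy a key that an attached controller still references. The key is merely flagged as removed and its refcount drops from 2 to 1; it only vanishes from keyring_get_keys once bdev_nvme_detach_controller releases the last reference. Condensed into the same RPCs (key path as logged):

rpc="./spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

$rpc keyring_file_add_key key0 /tmp/tmp.ujjQgCLO14          # refcnt 1
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0   # refcnt 2

$rpc keyring_file_remove_key key0                           # flagged removed, refcnt 2 -> 1
$rpc keyring_get_keys | jq '.[] | select(.name == "key0") | {removed, refcnt}'

$rpc bdev_nvme_detach_controller nvme0                      # last reference dropped
$rpc keyring_get_keys | jq length                           # now 0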
12:25:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.143 12:25:16 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:15.143 12:25:16 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ujjQgCLO14 00:29:15.143 12:25:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ujjQgCLO14 00:29:15.404 12:25:16 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RPLjgnkYDu 00:29:15.404 12:25:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RPLjgnkYDu 00:29:15.404 12:25:16 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:15.404 12:25:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:15.665 nvme0n1 00:29:15.665 12:25:16 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:15.665 12:25:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:15.927 12:25:17 -- keyring/file.sh@112 -- # config='{ 00:29:15.927 "subsystems": [ 00:29:15.927 { 00:29:15.927 "subsystem": "keyring", 00:29:15.927 "config": [ 00:29:15.927 { 00:29:15.927 "method": "keyring_file_add_key", 00:29:15.927 "params": { 00:29:15.927 "name": "key0", 00:29:15.927 "path": "/tmp/tmp.ujjQgCLO14" 00:29:15.927 } 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "method": "keyring_file_add_key", 00:29:15.927 "params": { 00:29:15.927 "name": "key1", 00:29:15.927 "path": "/tmp/tmp.RPLjgnkYDu" 00:29:15.927 } 00:29:15.927 } 00:29:15.927 ] 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "subsystem": "iobuf", 00:29:15.927 "config": [ 00:29:15.927 { 00:29:15.927 "method": "iobuf_set_options", 00:29:15.927 "params": { 00:29:15.927 "small_pool_count": 8192, 00:29:15.927 "large_pool_count": 1024, 00:29:15.927 "small_bufsize": 8192, 00:29:15.927 "large_bufsize": 135168 00:29:15.927 } 00:29:15.927 } 00:29:15.927 ] 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "subsystem": "sock", 00:29:15.927 "config": [ 00:29:15.927 { 00:29:15.927 "method": "sock_impl_set_options", 00:29:15.927 "params": { 00:29:15.927 "impl_name": "posix", 00:29:15.927 "recv_buf_size": 2097152, 00:29:15.927 "send_buf_size": 2097152, 00:29:15.927 "enable_recv_pipe": true, 00:29:15.927 "enable_quickack": false, 00:29:15.927 "enable_placement_id": 0, 00:29:15.927 "enable_zerocopy_send_server": true, 00:29:15.927 "enable_zerocopy_send_client": false, 00:29:15.927 "zerocopy_threshold": 0, 00:29:15.927 "tls_version": 0, 00:29:15.927 "enable_ktls": false 00:29:15.927 } 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "method": "sock_impl_set_options", 00:29:15.927 "params": { 00:29:15.927 "impl_name": "ssl", 00:29:15.927 "recv_buf_size": 4096, 00:29:15.927 "send_buf_size": 4096, 00:29:15.927 "enable_recv_pipe": true, 00:29:15.927 "enable_quickack": false, 00:29:15.927 "enable_placement_id": 0, 00:29:15.927 "enable_zerocopy_send_server": true, 00:29:15.927 "enable_zerocopy_send_client": false, 00:29:15.927 "zerocopy_threshold": 0, 00:29:15.927 
"tls_version": 0, 00:29:15.927 "enable_ktls": false 00:29:15.927 } 00:29:15.927 } 00:29:15.927 ] 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "subsystem": "vmd", 00:29:15.927 "config": [] 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "subsystem": "accel", 00:29:15.927 "config": [ 00:29:15.927 { 00:29:15.927 "method": "accel_set_options", 00:29:15.927 "params": { 00:29:15.927 "small_cache_size": 128, 00:29:15.927 "large_cache_size": 16, 00:29:15.927 "task_count": 2048, 00:29:15.927 "sequence_count": 2048, 00:29:15.927 "buf_count": 2048 00:29:15.927 } 00:29:15.927 } 00:29:15.927 ] 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "subsystem": "bdev", 00:29:15.927 "config": [ 00:29:15.927 { 00:29:15.927 "method": "bdev_set_options", 00:29:15.927 "params": { 00:29:15.927 "bdev_io_pool_size": 65535, 00:29:15.927 "bdev_io_cache_size": 256, 00:29:15.927 "bdev_auto_examine": true, 00:29:15.927 "iobuf_small_cache_size": 128, 00:29:15.927 "iobuf_large_cache_size": 16 00:29:15.927 } 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "method": "bdev_raid_set_options", 00:29:15.927 "params": { 00:29:15.927 "process_window_size_kb": 1024 00:29:15.927 } 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "method": "bdev_iscsi_set_options", 00:29:15.927 "params": { 00:29:15.927 "timeout_sec": 30 00:29:15.927 } 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "method": "bdev_nvme_set_options", 00:29:15.927 "params": { 00:29:15.927 "action_on_timeout": "none", 00:29:15.927 "timeout_us": 0, 00:29:15.927 "timeout_admin_us": 0, 00:29:15.927 "keep_alive_timeout_ms": 10000, 00:29:15.927 "arbitration_burst": 0, 00:29:15.927 "low_priority_weight": 0, 00:29:15.927 "medium_priority_weight": 0, 00:29:15.927 "high_priority_weight": 0, 00:29:15.927 "nvme_adminq_poll_period_us": 10000, 00:29:15.927 "nvme_ioq_poll_period_us": 0, 00:29:15.927 "io_queue_requests": 512, 00:29:15.927 "delay_cmd_submit": true, 00:29:15.927 "transport_retry_count": 4, 00:29:15.927 "bdev_retry_count": 3, 00:29:15.927 "transport_ack_timeout": 0, 00:29:15.927 "ctrlr_loss_timeout_sec": 0, 00:29:15.927 "reconnect_delay_sec": 0, 00:29:15.927 "fast_io_fail_timeout_sec": 0, 00:29:15.927 "disable_auto_failback": false, 00:29:15.927 "generate_uuids": false, 00:29:15.927 "transport_tos": 0, 00:29:15.927 "nvme_error_stat": false, 00:29:15.927 "rdma_srq_size": 0, 00:29:15.927 "io_path_stat": false, 00:29:15.927 "allow_accel_sequence": false, 00:29:15.927 "rdma_max_cq_size": 0, 00:29:15.927 "rdma_cm_event_timeout_ms": 0, 00:29:15.927 "dhchap_digests": [ 00:29:15.927 "sha256", 00:29:15.927 "sha384", 00:29:15.927 "sha512" 00:29:15.927 ], 00:29:15.927 "dhchap_dhgroups": [ 00:29:15.927 "null", 00:29:15.927 "ffdhe2048", 00:29:15.927 "ffdhe3072", 00:29:15.927 "ffdhe4096", 00:29:15.927 "ffdhe6144", 00:29:15.927 "ffdhe8192" 00:29:15.927 ] 00:29:15.927 } 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "method": "bdev_nvme_attach_controller", 00:29:15.927 "params": { 00:29:15.927 "name": "nvme0", 00:29:15.927 "trtype": "TCP", 00:29:15.927 "adrfam": "IPv4", 00:29:15.927 "traddr": "127.0.0.1", 00:29:15.927 "trsvcid": "4420", 00:29:15.927 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.927 "prchk_reftag": false, 00:29:15.927 "prchk_guard": false, 00:29:15.927 "ctrlr_loss_timeout_sec": 0, 00:29:15.927 "reconnect_delay_sec": 0, 00:29:15.927 "fast_io_fail_timeout_sec": 0, 00:29:15.927 "psk": "key0", 00:29:15.927 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:15.927 "hdgst": false, 00:29:15.927 "ddgst": false 00:29:15.927 } 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "method": "bdev_nvme_set_hotplug", 
00:29:15.927 "params": { 00:29:15.927 "period_us": 100000, 00:29:15.927 "enable": false 00:29:15.927 } 00:29:15.927 }, 00:29:15.927 { 00:29:15.928 "method": "bdev_wait_for_examine" 00:29:15.928 } 00:29:15.928 ] 00:29:15.928 }, 00:29:15.928 { 00:29:15.928 "subsystem": "nbd", 00:29:15.928 "config": [] 00:29:15.928 } 00:29:15.928 ] 00:29:15.928 }' 00:29:15.928 12:25:17 -- keyring/file.sh@114 -- # killprocess 3615121 00:29:15.928 12:25:17 -- common/autotest_common.sh@936 -- # '[' -z 3615121 ']' 00:29:15.928 12:25:17 -- common/autotest_common.sh@940 -- # kill -0 3615121 00:29:15.928 12:25:17 -- common/autotest_common.sh@941 -- # uname 00:29:15.928 12:25:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:15.928 12:25:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3615121 00:29:15.928 12:25:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:15.928 12:25:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:15.928 12:25:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3615121' 00:29:15.928 killing process with pid 3615121 00:29:15.928 12:25:17 -- common/autotest_common.sh@955 -- # kill 3615121 00:29:15.928 Received shutdown signal, test time was about 1.000000 seconds 00:29:15.928 00:29:15.928 Latency(us) 00:29:15.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.928 =================================================================================================================== 00:29:15.928 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.928 12:25:17 -- common/autotest_common.sh@960 -- # wait 3615121 00:29:16.189 12:25:17 -- keyring/file.sh@117 -- # bperfpid=3616925 00:29:16.189 12:25:17 -- keyring/file.sh@119 -- # waitforlisten 3616925 /var/tmp/bperf.sock 00:29:16.189 12:25:17 -- common/autotest_common.sh@817 -- # '[' -z 3616925 ']' 00:29:16.189 12:25:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:16.189 12:25:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:16.189 12:25:17 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:16.189 12:25:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:16.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:16.189 12:25:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:16.189 12:25:17 -- common/autotest_common.sh@10 -- # set +x 00:29:16.189 12:25:17 -- keyring/file.sh@115 -- # echo '{ 00:29:16.189 "subsystems": [ 00:29:16.189 { 00:29:16.189 "subsystem": "keyring", 00:29:16.189 "config": [ 00:29:16.189 { 00:29:16.189 "method": "keyring_file_add_key", 00:29:16.189 "params": { 00:29:16.189 "name": "key0", 00:29:16.189 "path": "/tmp/tmp.ujjQgCLO14" 00:29:16.189 } 00:29:16.189 }, 00:29:16.189 { 00:29:16.189 "method": "keyring_file_add_key", 00:29:16.189 "params": { 00:29:16.189 "name": "key1", 00:29:16.189 "path": "/tmp/tmp.RPLjgnkYDu" 00:29:16.189 } 00:29:16.189 } 00:29:16.189 ] 00:29:16.189 }, 00:29:16.189 { 00:29:16.189 "subsystem": "iobuf", 00:29:16.189 "config": [ 00:29:16.189 { 00:29:16.189 "method": "iobuf_set_options", 00:29:16.189 "params": { 00:29:16.189 "small_pool_count": 8192, 00:29:16.189 "large_pool_count": 1024, 00:29:16.189 "small_bufsize": 8192, 00:29:16.189 "large_bufsize": 135168 00:29:16.189 } 00:29:16.189 } 00:29:16.189 ] 00:29:16.189 }, 00:29:16.189 { 00:29:16.189 "subsystem": "sock", 00:29:16.189 "config": [ 00:29:16.189 { 00:29:16.189 "method": "sock_impl_set_options", 00:29:16.189 "params": { 00:29:16.189 "impl_name": "posix", 00:29:16.189 "recv_buf_size": 2097152, 00:29:16.189 "send_buf_size": 2097152, 00:29:16.189 "enable_recv_pipe": true, 00:29:16.189 "enable_quickack": false, 00:29:16.189 "enable_placement_id": 0, 00:29:16.189 "enable_zerocopy_send_server": true, 00:29:16.189 "enable_zerocopy_send_client": false, 00:29:16.189 "zerocopy_threshold": 0, 00:29:16.189 "tls_version": 0, 00:29:16.189 "enable_ktls": false 00:29:16.189 } 00:29:16.189 }, 00:29:16.189 { 00:29:16.189 "method": "sock_impl_set_options", 00:29:16.189 "params": { 00:29:16.189 "impl_name": "ssl", 00:29:16.189 "recv_buf_size": 4096, 00:29:16.189 "send_buf_size": 4096, 00:29:16.189 "enable_recv_pipe": true, 00:29:16.189 "enable_quickack": false, 00:29:16.189 "enable_placement_id": 0, 00:29:16.189 "enable_zerocopy_send_server": true, 00:29:16.189 "enable_zerocopy_send_client": false, 00:29:16.189 "zerocopy_threshold": 0, 00:29:16.189 "tls_version": 0, 00:29:16.189 "enable_ktls": false 00:29:16.189 } 00:29:16.190 } 00:29:16.190 ] 00:29:16.190 }, 00:29:16.190 { 00:29:16.190 "subsystem": "vmd", 00:29:16.190 "config": [] 00:29:16.190 }, 00:29:16.190 { 00:29:16.190 "subsystem": "accel", 00:29:16.190 "config": [ 00:29:16.190 { 00:29:16.190 "method": "accel_set_options", 00:29:16.190 "params": { 00:29:16.190 "small_cache_size": 128, 00:29:16.190 "large_cache_size": 16, 00:29:16.190 "task_count": 2048, 00:29:16.190 "sequence_count": 2048, 00:29:16.190 "buf_count": 2048 00:29:16.190 } 00:29:16.190 } 00:29:16.190 ] 00:29:16.190 }, 00:29:16.190 { 00:29:16.190 "subsystem": "bdev", 00:29:16.190 "config": [ 00:29:16.190 { 00:29:16.190 "method": "bdev_set_options", 00:29:16.190 "params": { 00:29:16.190 "bdev_io_pool_size": 65535, 00:29:16.190 "bdev_io_cache_size": 256, 00:29:16.190 "bdev_auto_examine": true, 00:29:16.190 "iobuf_small_cache_size": 128, 00:29:16.190 "iobuf_large_cache_size": 16 00:29:16.190 } 00:29:16.190 }, 00:29:16.190 { 00:29:16.190 "method": "bdev_raid_set_options", 00:29:16.190 "params": { 00:29:16.190 "process_window_size_kb": 1024 00:29:16.190 } 00:29:16.190 }, 00:29:16.190 { 00:29:16.190 "method": "bdev_iscsi_set_options", 00:29:16.190 "params": { 00:29:16.190 "timeout_sec": 30 00:29:16.190 } 00:29:16.190 }, 00:29:16.190 { 00:29:16.190 "method": "bdev_nvme_set_options", 
00:29:16.190 "params": { 00:29:16.190 "action_on_timeout": "none", 00:29:16.190 "timeout_us": 0, 00:29:16.190 "timeout_admin_us": 0, 00:29:16.190 "keep_alive_timeout_ms": 10000, 00:29:16.190 "arbitration_burst": 0, 00:29:16.190 "low_priority_weight": 0, 00:29:16.190 "medium_priority_weight": 0, 00:29:16.190 "high_priority_weight": 0, 00:29:16.190 "nvme_adminq_poll_period_us": 10000, 00:29:16.190 "nvme_ioq_poll_period_us": 0, 00:29:16.190 "io_queue_requests": 512, 00:29:16.190 "delay_cmd_submit": true, 00:29:16.190 "transport_retry_count": 4, 00:29:16.190 "bdev_retry_count": 3, 00:29:16.190 "transport_ack_timeout": 0, 00:29:16.190 "ctrlr_loss_timeout_sec": 0, 00:29:16.190 "reconnect_delay_sec": 0, 00:29:16.190 "fast_io_fail_timeout_sec": 0, 00:29:16.190 "disable_auto_failback": false, 00:29:16.190 "generate_uuids": false, 00:29:16.190 "transport_tos": 0, 00:29:16.190 "nvme_error_stat": false, 00:29:16.190 "rdma_srq_size": 0, 00:29:16.190 "io_path_stat": false, 00:29:16.190 "allow_accel_sequence": false, 00:29:16.190 "rdma_max_cq_size": 0, 00:29:16.190 "rdma_cm_event_timeout_ms": 0, 00:29:16.190 "dhchap_digests": [ 00:29:16.190 "sha256", 00:29:16.190 "sha384", 00:29:16.190 "sha512" 00:29:16.190 ], 00:29:16.190 "dhchap_dhgroups": [ 00:29:16.190 "null", 00:29:16.190 "ffdhe2048", 00:29:16.190 "ffdhe3072", 00:29:16.190 "ffdhe4096", 00:29:16.190 "ffdhe6144", 00:29:16.190 "ffdhe8192" 00:29:16.190 ] 00:29:16.190 } 00:29:16.190 }, 00:29:16.190 { 00:29:16.190 "method": "bdev_nvme_attach_controller", 00:29:16.190 "params": { 00:29:16.190 "name": "nvme0", 00:29:16.190 "trtype": "TCP", 00:29:16.190 "adrfam": "IPv4", 00:29:16.190 "traddr": "127.0.0.1", 00:29:16.190 "trsvcid": "4420", 00:29:16.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:16.190 "prchk_reftag": false, 00:29:16.190 "prchk_guard": false, 00:29:16.190 "ctrlr_loss_timeout_sec": 0, 00:29:16.190 "reconnect_delay_sec": 0, 00:29:16.190 "fast_io_fail_timeout_sec": 0, 00:29:16.190 "psk": "key0", 00:29:16.190 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:16.190 "hdgst": false, 00:29:16.190 "ddgst": false 00:29:16.190 } 00:29:16.190 }, 00:29:16.190 { 00:29:16.190 "method": "bdev_nvme_set_hotplug", 00:29:16.190 "params": { 00:29:16.190 "period_us": 100000, 00:29:16.190 "enable": false 00:29:16.190 } 00:29:16.190 }, 00:29:16.190 { 00:29:16.190 "method": "bdev_wait_for_examine" 00:29:16.190 } 00:29:16.190 ] 00:29:16.190 }, 00:29:16.190 { 00:29:16.190 "subsystem": "nbd", 00:29:16.190 "config": [] 00:29:16.190 } 00:29:16.190 ] 00:29:16.190 }' 00:29:16.190 [2024-04-26 12:25:17.254650] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:29:16.190 [2024-04-26 12:25:17.254706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3616925 ] 00:29:16.190 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.190 [2024-04-26 12:25:17.330061] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.190 [2024-04-26 12:25:17.381648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.451 [2024-04-26 12:25:17.515232] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:17.022 12:25:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:17.022 12:25:18 -- common/autotest_common.sh@850 -- # return 0 00:29:17.022 12:25:18 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:17.022 12:25:18 -- keyring/file.sh@120 -- # jq length 00:29:17.022 12:25:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.022 12:25:18 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:17.022 12:25:18 -- keyring/file.sh@121 -- # get_refcnt key0 00:29:17.022 12:25:18 -- keyring/common.sh@12 -- # get_key key0 00:29:17.022 12:25:18 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:17.022 12:25:18 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:17.022 12:25:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.022 12:25:18 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:17.283 12:25:18 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:17.283 12:25:18 -- keyring/file.sh@122 -- # get_refcnt key1 00:29:17.283 12:25:18 -- keyring/common.sh@12 -- # get_key key1 00:29:17.283 12:25:18 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:17.283 12:25:18 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:17.283 12:25:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.283 12:25:18 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:17.283 12:25:18 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:17.283 12:25:18 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:17.283 12:25:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:17.283 12:25:18 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:17.544 12:25:18 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:17.544 12:25:18 -- keyring/file.sh@1 -- # cleanup 00:29:17.544 12:25:18 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.ujjQgCLO14 /tmp/tmp.RPLjgnkYDu 00:29:17.544 12:25:18 -- keyring/file.sh@20 -- # killprocess 3616925 00:29:17.544 12:25:18 -- common/autotest_common.sh@936 -- # '[' -z 3616925 ']' 00:29:17.544 12:25:18 -- common/autotest_common.sh@940 -- # kill -0 3616925 00:29:17.544 12:25:18 -- common/autotest_common.sh@941 -- # uname 00:29:17.544 12:25:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:17.544 12:25:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3616925 00:29:17.544 12:25:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:17.544 12:25:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:17.544 12:25:18 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 3616925' 00:29:17.544 killing process with pid 3616925 00:29:17.544 12:25:18 -- common/autotest_common.sh@955 -- # kill 3616925 00:29:17.544 Received shutdown signal, test time was about 1.000000 seconds 00:29:17.544 00:29:17.544 Latency(us) 00:29:17.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.544 =================================================================================================================== 00:29:17.544 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:17.544 12:25:18 -- common/autotest_common.sh@960 -- # wait 3616925 00:29:17.805 12:25:18 -- keyring/file.sh@21 -- # killprocess 3615078 00:29:17.805 12:25:18 -- common/autotest_common.sh@936 -- # '[' -z 3615078 ']' 00:29:17.805 12:25:18 -- common/autotest_common.sh@940 -- # kill -0 3615078 00:29:17.805 12:25:18 -- common/autotest_common.sh@941 -- # uname 00:29:17.805 12:25:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:17.805 12:25:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3615078 00:29:17.805 12:25:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:17.805 12:25:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:17.805 12:25:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3615078' 00:29:17.805 killing process with pid 3615078 00:29:17.805 12:25:18 -- common/autotest_common.sh@955 -- # kill 3615078 00:29:17.805 [2024-04-26 12:25:18.858746] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:17.805 12:25:18 -- common/autotest_common.sh@960 -- # wait 3615078 00:29:18.069 00:29:18.069 real 0m11.043s 00:29:18.069 user 0m26.357s 00:29:18.069 sys 0m2.533s 00:29:18.069 12:25:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:18.069 12:25:19 -- common/autotest_common.sh@10 -- # set +x 00:29:18.069 ************************************ 00:29:18.069 END TEST keyring_file 00:29:18.069 ************************************ 00:29:18.069 12:25:19 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:29:18.069 12:25:19 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:18.069 12:25:19 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:29:18.069 12:25:19 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:29:18.069 12:25:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:18.069 12:25:19 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:29:18.069 12:25:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:18.069 12:25:19 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:18.069 12:25:19 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:29:18.069 12:25:19 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:29:18.069 12:25:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:18.069 12:25:19 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:29:18.069 12:25:19 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:18.069 12:25:19 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:18.069 12:25:19 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:29:18.069 12:25:19 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:29:18.069 12:25:19 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:29:18.069 12:25:19 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:29:18.069 12:25:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:18.069 12:25:19 -- common/autotest_common.sh@10 -- # set +x 00:29:18.069 12:25:19 -- spdk/autotest.sh@381 -- # 
autotest_cleanup 00:29:18.069 12:25:19 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:29:18.069 12:25:19 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:29:18.069 12:25:19 -- common/autotest_common.sh@10 -- # set +x 00:29:26.290 INFO: APP EXITING 00:29:26.290 INFO: killing all VMs 00:29:26.290 INFO: killing vhost app 00:29:26.290 INFO: EXIT DONE 00:29:28.835 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:29:28.835 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:29:28.835 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:29:28.835 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:29:29.096 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:29:29.096 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:29:29.096 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:29:29.096 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:29:29.096 0000:65:00.0 (144d a80a): Already using the nvme driver 00:29:29.096 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:29:29.096 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:29:29.096 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:29:29.096 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:29:29.096 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:29:29.096 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:29:29.096 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:29:29.356 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:29:33.564 Cleaning 00:29:33.564 Removing: /var/run/dpdk/spdk0/config 00:29:33.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:33.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:33.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:33.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:33.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:33.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:33.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:33.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:33.564 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:33.564 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:33.564 Removing: /var/run/dpdk/spdk1/config 00:29:33.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:33.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:33.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:33.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:33.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:33.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:33.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:33.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:33.564 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:33.564 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:33.564 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:33.565 Removing: /var/run/dpdk/spdk2/config 00:29:33.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:33.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:33.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:33.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:33.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:33.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 
00:29:33.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:33.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:33.565 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:33.565 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:33.565 Removing: /var/run/dpdk/spdk3/config 00:29:33.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:33.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:33.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:33.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:33.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:33.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:33.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:33.565 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:33.565 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:33.565 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:33.565 Removing: /var/run/dpdk/spdk4/config 00:29:33.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:33.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:33.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:33.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:33.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:33.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:33.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:33.565 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:33.565 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:33.565 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:33.565 Removing: /dev/shm/bdev_svc_trace.1 00:29:33.565 Removing: /dev/shm/nvmf_trace.0 00:29:33.565 Removing: /dev/shm/spdk_tgt_trace.pid3192431 00:29:33.565 Removing: /var/run/dpdk/spdk0 00:29:33.565 Removing: /var/run/dpdk/spdk1 00:29:33.565 Removing: /var/run/dpdk/spdk2 00:29:33.565 Removing: /var/run/dpdk/spdk3 00:29:33.565 Removing: /var/run/dpdk/spdk4 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3190915 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3192431 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3193314 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3194536 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3194704 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3196092 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3196113 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3196567 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3197618 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3198163 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3198551 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3198953 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3199361 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3199770 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3200129 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3200406 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3200702 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3202202 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3205563 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3205937 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3206310 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3206623 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3207020 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3207208 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3207734 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3207753 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3208120 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3208310 00:29:33.565 Removing: 
/var/run/dpdk/spdk_pid3208502 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3208836 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3209294 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3209648 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3210052 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3210384 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3210468 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3210667 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3210925 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3211282 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3211635 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3211997 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3212355 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3212708 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3212984 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3213226 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3213482 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3213834 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3214196 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3214549 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3214907 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3215252 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3215503 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3215777 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3216047 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3216399 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3216755 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3217116 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3217258 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3217726 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3222364 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3277574 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3282795 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3294187 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3300644 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3305707 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3306396 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3320435 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3320521 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3321560 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3322566 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3323573 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3324243 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3324251 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3324581 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3324602 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3324604 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3325643 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3326679 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3327778 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3328403 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3328532 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3328788 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3330097 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3331578 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3342463 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3342815 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3347947 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3355031 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3358040 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3370454 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3381285 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3383569 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3384729 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3406009 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3410760 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3416225 00:29:33.565 Removing: 
/var/run/dpdk/spdk_pid3418226 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3420237 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3420536 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3420613 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3420937 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3421652 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3423664 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3424746 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3425129 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3427827 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3428533 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3429249 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3434371 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3447307 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3452130 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3459409 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3461027 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3462764 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3468045 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3473026 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3482239 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3482260 00:29:33.565 Removing: /var/run/dpdk/spdk_pid3487364 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3487690 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3488028 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3488371 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3488550 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3493947 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3495063 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3500467 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3503804 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3510490 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3516886 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3525736 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3525776 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3548925 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3550184 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3550899 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3551578 00:29:33.828 Removing: /var/run/dpdk/spdk_pid3552643 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3553331 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3554022 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3554795 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3559833 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3560150 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3567576 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3567953 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3570469 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3577954 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3577961 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3583958 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3586465 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3588757 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3590178 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3592696 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3594127 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3604859 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3605521 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3606166 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3608939 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3609516 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3610182 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3615078 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3615121 00:29:33.829 Removing: /var/run/dpdk/spdk_pid3616925 00:29:33.829 Clean 00:29:34.090 12:25:35 -- common/autotest_common.sh@1437 -- # return 0 00:29:34.090 12:25:35 -- 
spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:29:34.090 12:25:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:34.090 12:25:35 -- common/autotest_common.sh@10 -- # set +x 00:29:34.090 12:25:35 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:29:34.090 12:25:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:34.090 12:25:35 -- common/autotest_common.sh@10 -- # set +x 00:29:34.090 12:25:35 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:34.090 12:25:35 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:34.090 12:25:35 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:34.090 12:25:35 -- spdk/autotest.sh@389 -- # hash lcov 00:29:34.090 12:25:35 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:34.090 12:25:35 -- spdk/autotest.sh@391 -- # hostname 00:29:34.090 12:25:35 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:34.351 geninfo: WARNING: invalid characters removed from testname! 00:30:00.934 12:25:58 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:00.934 12:26:01 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:01.874 12:26:03 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:04.415 12:26:05 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:05.799 12:26:06 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:07.182 12:26:08 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:09.091 12:26:09 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:09.091 12:26:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.091 12:26:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:09.091 12:26:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.091 12:26:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.091 12:26:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.091 12:26:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.091 12:26:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.091 12:26:09 -- paths/export.sh@5 -- $ export PATH 00:30:09.091 12:26:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.091 12:26:09 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:09.092 12:26:09 -- common/autobuild_common.sh@435 -- $ date +%s 00:30:09.092 12:26:09 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714127169.XXXXXX 00:30:09.092 12:26:09 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714127169.Enz2YC 00:30:09.092 12:26:09 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:30:09.092 12:26:09 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:30:09.092 12:26:09 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:09.092 12:26:09 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:09.092 12:26:09 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:09.092 12:26:09 -- common/autobuild_common.sh@451 -- $ get_config_params 00:30:09.092 12:26:09 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:30:09.092 12:26:09 -- common/autotest_common.sh@10 -- $ set +x 00:30:09.092 12:26:09 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:09.092 12:26:09 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:30:09.092 12:26:09 -- pm/common@17 -- $ local monitor 00:30:09.092 12:26:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.092 12:26:09 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3628779 00:30:09.092 12:26:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.092 12:26:09 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3628781 00:30:09.092 12:26:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.092 12:26:09 -- pm/common@21 -- $ date +%s 00:30:09.092 12:26:09 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3628783 00:30:09.092 12:26:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.092 12:26:09 -- pm/common@21 -- $ date +%s 00:30:09.092 12:26:09 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3628786 00:30:09.092 12:26:09 -- pm/common@26 -- $ sleep 1 00:30:09.092 12:26:09 -- pm/common@21 -- $ date +%s 00:30:09.092 12:26:09 -- pm/common@21 -- $ date +%s 00:30:09.092 12:26:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714127169 00:30:09.092 12:26:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714127169 00:30:09.092 12:26:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714127169 00:30:09.092 12:26:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714127169 00:30:09.092 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714127169_collect-vmstat.pm.log 00:30:09.092 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714127169_collect-cpu-load.pm.log 00:30:09.092 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714127169_collect-bmc-pm.bmc.pm.log 00:30:09.092 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714127169_collect-cpu-temp.pm.log 00:30:10.033 
12:26:10 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:30:10.034 12:26:10 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:30:10.034 12:26:10 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:10.034 12:26:10 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:10.034 12:26:10 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:10.034 12:26:10 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:10.034 12:26:10 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:10.034 12:26:10 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:10.034 12:26:10 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:10.034 12:26:10 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:10.034 12:26:10 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:10.034 12:26:10 -- pm/common@30 -- $ signal_monitor_resources TERM 00:30:10.034 12:26:10 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:30:10.034 12:26:10 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:10.034 12:26:10 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:10.034 12:26:10 -- pm/common@45 -- $ pid=3628801 00:30:10.034 12:26:10 -- pm/common@52 -- $ sudo kill -TERM 3628801 00:30:10.034 12:26:11 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:10.034 12:26:11 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:10.034 12:26:11 -- pm/common@45 -- $ pid=3628805 00:30:10.034 12:26:11 -- pm/common@52 -- $ sudo kill -TERM 3628805 00:30:10.034 12:26:11 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:10.034 12:26:11 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:10.034 12:26:11 -- pm/common@45 -- $ pid=3628808 00:30:10.034 12:26:11 -- pm/common@52 -- $ sudo kill -TERM 3628808 00:30:10.034 12:26:11 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:10.034 12:26:11 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:10.034 12:26:11 -- pm/common@45 -- $ pid=3628809 00:30:10.034 12:26:11 -- pm/common@52 -- $ sudo kill -TERM 3628809 00:30:10.034 + [[ -n 3070238 ]] 00:30:10.034 + sudo kill 3070238 00:30:10.044 [Pipeline] } 00:30:10.062 [Pipeline] // stage 00:30:10.068 [Pipeline] } 00:30:10.084 [Pipeline] // timeout 00:30:10.088 [Pipeline] } 00:30:10.105 [Pipeline] // catchError 00:30:10.110 [Pipeline] } 00:30:10.128 [Pipeline] // wrap 00:30:10.134 [Pipeline] } 00:30:10.151 [Pipeline] // catchError 00:30:10.160 [Pipeline] stage 00:30:10.163 [Pipeline] { (Epilogue) 00:30:10.178 [Pipeline] catchError 00:30:10.180 [Pipeline] { 00:30:10.196 [Pipeline] echo 00:30:10.197 Cleanup processes 00:30:10.203 [Pipeline] sh 00:30:10.490 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:10.491 3628902 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:10.491 3629358 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:10.505 [Pipeline] sh 00:30:10.794 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:30:10.794 ++ grep -v 'sudo pgrep' 00:30:10.794 ++ awk '{print $1}' 00:30:10.794 + sudo kill -9 3628902 00:30:10.808 [Pipeline] sh 00:30:11.098 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:21.148 [Pipeline] sh 00:30:21.434 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:21.434 Artifacts sizes are good 00:30:21.450 [Pipeline] archiveArtifacts 00:30:21.457 Archiving artifacts 00:30:21.633 [Pipeline] sh 00:30:21.917 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:21.933 [Pipeline] cleanWs 00:30:21.943 [WS-CLEANUP] Deleting project workspace... 00:30:21.943 [WS-CLEANUP] Deferred wipeout is used... 00:30:21.950 [WS-CLEANUP] done 00:30:21.952 [Pipeline] } 00:30:21.971 [Pipeline] // catchError 00:30:21.982 [Pipeline] sh 00:30:22.267 + logger -p user.info -t JENKINS-CI 00:30:22.276 [Pipeline] } 00:30:22.292 [Pipeline] // stage 00:30:22.298 [Pipeline] } 00:30:22.326 [Pipeline] // node 00:30:22.329 [Pipeline] End of Pipeline 00:30:22.359 Finished: SUCCESS
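For quick reference, the coverage stage logged near the end of this run reduces to the usual lcov capture, merge and filter sequence. A condensed sketch follows; the --rc branch/function-coverage switches from the log are omitted for brevity, the loop is a shorthand for the individual filter invocations shown above, and $spdk/$out simply abbreviate the workspace paths used in the log:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    out=$spdk/../output
    # capture the counters gathered during the run, tagged with the build host name
    lcov --no-external -q -c -d $spdk -t "$(hostname)" -o $out/cov_test.info
    # merge with the baseline taken before the tests started
    lcov -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info
    # drop DPDK, system headers and example/app code from the final report
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r $out/cov_total.info "$pat" -o $out/cov_total.info
    done
    # remove the intermediate tracefiles once cov_total.info is in place
    rm -f $out/cov_base.info $out/cov_test.info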